2026-04-07 00:00:07.424529 | Job console starting
2026-04-07 00:00:07.460724 | Updating git repos
2026-04-07 00:00:07.603573 | Cloning repos into workspace
2026-04-07 00:00:08.028312 | Restoring repo states
2026-04-07 00:00:08.058098 | Merging changes
2026-04-07 00:00:08.058299 | Checking out repos
2026-04-07 00:00:08.729871 | Preparing playbooks
2026-04-07 00:00:09.789072 | Running Ansible setup
2026-04-07 00:00:17.599523 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-07 00:00:19.642500 |
2026-04-07 00:00:19.642628 | PLAY [Base pre]
2026-04-07 00:00:19.675920 |
2026-04-07 00:00:19.676043 | TASK [Setup log path fact]
2026-04-07 00:00:19.729846 | orchestrator | ok
2026-04-07 00:00:19.773942 |
2026-04-07 00:00:19.774061 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-07 00:00:19.866242 | orchestrator | ok
2026-04-07 00:00:19.903660 |
2026-04-07 00:00:19.903767 | TASK [emit-job-header : Print job information]
2026-04-07 00:00:20.015549 | # Job Information
2026-04-07 00:00:20.015698 | Ansible Version: 2.16.14
2026-04-07 00:00:20.015728 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-07 00:00:20.015756 | Pipeline: periodic-midnight
2026-04-07 00:00:20.015775 | Executor: 521e9411259a
2026-04-07 00:00:20.015793 | Triggered by: https://github.com/osism/testbed
2026-04-07 00:00:20.015811 | Event ID: 319058190fd34c37a7841e4813e72f7e
2026-04-07 00:00:20.027110 |
2026-04-07 00:00:20.027210 | LOOP [emit-job-header : Print node information]
2026-04-07 00:00:20.172705 | orchestrator | ok:
2026-04-07 00:00:20.173023 | orchestrator | # Node Information
2026-04-07 00:00:20.173071 | orchestrator | Inventory Hostname: orchestrator
2026-04-07 00:00:20.173098 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-07 00:00:20.173120 | orchestrator | Username: zuul-testbed05
2026-04-07 00:00:20.173141 | orchestrator | Distro: Debian 12.13
2026-04-07 00:00:20.173164 | orchestrator | Provider: static-testbed
2026-04-07 00:00:20.173185 | orchestrator | Region:
2026-04-07 00:00:20.173206 | orchestrator | Label: testbed-orchestrator
2026-04-07 00:00:20.173225 | orchestrator | Product Name: OpenStack Nova
2026-04-07 00:00:20.173244 | orchestrator | Interface IP: 81.163.193.140
2026-04-07 00:00:20.192397 |
2026-04-07 00:00:20.192513 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-07 00:00:21.206466 | orchestrator -> localhost | changed
2026-04-07 00:00:21.212791 |
2026-04-07 00:00:21.212884 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-07 00:00:22.814472 | orchestrator -> localhost | changed
2026-04-07 00:00:22.833857 |
2026-04-07 00:00:22.833954 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-07 00:00:23.625494 | orchestrator -> localhost | ok
2026-04-07 00:00:23.631400 |
2026-04-07 00:00:23.631507 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-07 00:00:23.682639 | orchestrator | ok
2026-04-07 00:00:23.761604 | orchestrator | included: /var/lib/zuul/builds/f25c0f5a182f4bef9ec88cb98187e293/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-07 00:00:23.792377 |
2026-04-07 00:00:23.792491 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-07 00:00:27.705322 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-07 00:00:27.705503 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/f25c0f5a182f4bef9ec88cb98187e293/work/f25c0f5a182f4bef9ec88cb98187e293_id_rsa
2026-04-07 00:00:27.705537 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/f25c0f5a182f4bef9ec88cb98187e293/work/f25c0f5a182f4bef9ec88cb98187e293_id_rsa.pub
2026-04-07 00:00:27.705559 | orchestrator -> localhost | The key fingerprint is:
2026-04-07 00:00:27.705578 | orchestrator -> localhost | SHA256:B7IDGSb5yu2ysRhHfOsReVNgaLC+fTbvWpoY6gH8ahk zuul-build-sshkey
2026-04-07 00:00:27.705596 | orchestrator -> localhost | The key's randomart image is:
2026-04-07 00:00:27.705623 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-07 00:00:27.705640 | orchestrator -> localhost | | ooo.o |
2026-04-07 00:00:27.705658 | orchestrator -> localhost | | .+o+ . |
2026-04-07 00:00:27.705674 | orchestrator -> localhost | | .oo . o |
2026-04-07 00:00:27.705690 | orchestrator -> localhost | |.o .o + . |
2026-04-07 00:00:27.705706 | orchestrator -> localhost | |.o+o+ = S . |
2026-04-07 00:00:27.705726 | orchestrator -> localhost | | E++.+ o . |
2026-04-07 00:00:27.705743 | orchestrator -> localhost | |. Bo= + . |
2026-04-07 00:00:27.705760 | orchestrator -> localhost | | *o*.* * |
2026-04-07 00:00:27.705776 | orchestrator -> localhost | |oo=oo +oo |
2026-04-07 00:00:27.705793 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-07 00:00:27.705845 | orchestrator -> localhost | ok: Runtime: 0:00:02.270016
2026-04-07 00:00:27.711878 |
2026-04-07 00:00:27.711967 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-07 00:00:27.770277 | orchestrator | ok
2026-04-07 00:00:27.792628 | orchestrator | included: /var/lib/zuul/builds/f25c0f5a182f4bef9ec88cb98187e293/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-07 00:00:27.819846 |
2026-04-07 00:00:27.819949 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-07 00:00:27.854681 | orchestrator | skipping: Conditional result was False
2026-04-07 00:00:27.862435 |
2026-04-07 00:00:27.862542 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-07 00:00:28.773822 | orchestrator | changed
2026-04-07 00:00:28.795717 |
2026-04-07 00:00:28.795815 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-07 00:00:29.131488 | orchestrator | ok
2026-04-07 00:00:29.144004 |
2026-04-07 00:00:29.144112 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-07 00:00:29.591527 | orchestrator | ok
2026-04-07 00:00:29.602074 |
2026-04-07 00:00:29.602157 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-07 00:00:30.091322 | orchestrator | ok
2026-04-07 00:00:30.096371 |
2026-04-07 00:00:30.096463 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-07 00:00:30.191641 | orchestrator | skipping: Conditional result was False
2026-04-07 00:00:30.197379 |
2026-04-07 00:00:30.197479 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-07 00:00:31.817012 | orchestrator -> localhost | changed
2026-04-07 00:00:31.836613 |
2026-04-07 00:00:31.836712 | TASK [add-build-sshkey : Add back temp key]
2026-04-07 00:00:32.707362 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/f25c0f5a182f4bef9ec88cb98187e293/work/f25c0f5a182f4bef9ec88cb98187e293_id_rsa (zuul-build-sshkey)
2026-04-07 00:00:32.707596 | orchestrator -> localhost | ok: Runtime: 0:00:00.028655
2026-04-07 00:00:32.714581 |
2026-04-07 00:00:32.714678 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-07 00:00:33.256662 | orchestrator | ok
2026-04-07 00:00:33.276637 |
2026-04-07 00:00:33.276745 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-07 00:00:33.319885 | orchestrator | skipping: Conditional result was False
2026-04-07 00:00:33.479488 |
2026-04-07 00:00:33.479603 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-07 00:00:34.136456 | orchestrator | ok
2026-04-07 00:00:34.159831 |
2026-04-07 00:00:34.159937 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-07 00:00:34.197779 | orchestrator | ok
2026-04-07 00:00:34.203559 |
2026-04-07 00:00:34.203647 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-07 00:00:35.106717 | orchestrator -> localhost | ok
2026-04-07 00:00:35.117677 |
2026-04-07 00:00:35.117770 | TASK [validate-host : Collect information about the host]
2026-04-07 00:00:36.364393 | orchestrator | ok
2026-04-07 00:00:36.406371 |
2026-04-07 00:00:36.406492 | TASK [validate-host : Sanitize hostname]
2026-04-07 00:00:36.580087 | orchestrator | ok
2026-04-07 00:00:36.591251 |
2026-04-07 00:00:36.591342 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-07 00:00:38.208165 | orchestrator -> localhost | changed
2026-04-07 00:00:38.213158 |
2026-04-07 00:00:38.213239 | TASK [validate-host : Collect information about zuul worker]
2026-04-07 00:00:39.404853 | orchestrator | ok
2026-04-07 00:00:39.409290 |
2026-04-07 00:00:39.409373 | TASK [validate-host : Write out all zuul information for each host]
2026-04-07 00:00:40.694998 | orchestrator -> localhost | changed
2026-04-07 00:00:40.703466 |
2026-04-07 00:00:40.703548 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-07 00:00:41.038564 | orchestrator | ok
2026-04-07 00:00:41.050749 |
2026-04-07 00:00:41.050927 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-07 00:02:10.456327 | orchestrator | changed:
2026-04-07 00:02:10.456579 | orchestrator | .d..t...... src/
2026-04-07 00:02:10.456615 | orchestrator | .d..t...... src/github.com/
2026-04-07 00:02:10.456640 | orchestrator | .d..t...... src/github.com/osism/
2026-04-07 00:02:10.456662 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-07 00:02:10.456682 | orchestrator | RedHat.yml
2026-04-07 00:02:10.472229 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-07 00:02:10.472247 | orchestrator | RedHat.yml
2026-04-07 00:02:10.472300 | orchestrator | = 1.53.0"...
2026-04-07 00:02:22.596546 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-07 00:02:22.743956 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-07 00:02:23.228138 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-07 00:02:23.541662 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-07 00:02:24.419791 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-07 00:02:24.488034 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-07 00:02:25.030354 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-07 00:02:25.030412 | orchestrator |
2026-04-07 00:02:25.030419 | orchestrator | Providers are signed by their developers.
2026-04-07 00:02:25.030424 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-07 00:02:25.030428 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-07 00:02:25.030435 | orchestrator |
2026-04-07 00:02:25.030439 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-07 00:02:25.030444 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-07 00:02:25.030457 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-07 00:02:25.030461 | orchestrator | you run "tofu init" in the future.
2026-04-07 00:02:25.362064 | orchestrator |
2026-04-07 00:02:25.362131 | orchestrator | OpenTofu has been successfully initialized!
2026-04-07 00:02:25.362147 | orchestrator |
2026-04-07 00:02:25.362160 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-07 00:02:25.362173 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-07 00:02:25.362184 | orchestrator | should now work.
2026-04-07 00:02:25.362197 | orchestrator |
2026-04-07 00:02:25.362208 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-07 00:02:25.362220 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-07 00:02:25.362233 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-07 00:02:25.512015 | orchestrator | Created and switched to workspace "ci"!
2026-04-07 00:02:25.512139 | orchestrator |
2026-04-07 00:02:25.512157 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-07 00:02:25.512171 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-07 00:02:25.512184 | orchestrator | for this configuration.
2026-04-07 00:02:25.655728 | orchestrator | ci.auto.tfvars
2026-04-07 00:02:25.855379 | orchestrator | default_custom.tf
2026-04-07 00:02:27.607402 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-07 00:02:28.122392 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-07 00:02:28.490886 | orchestrator |
2026-04-07 00:02:28.490951 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-07 00:02:28.490959 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-07 00:02:28.490964 | orchestrator |   + create
2026-04-07 00:02:28.490969 | orchestrator |  <= read (data resources)
2026-04-07 00:02:28.490974 | orchestrator |
2026-04-07 00:02:28.490978 | orchestrator | OpenTofu will perform the following actions:
2026-04-07 00:02:28.490982 | orchestrator |
2026-04-07 00:02:28.490986 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-04-07 00:02:28.491004 | orchestrator |   # (config refers to values not yet known)
2026-04-07 00:02:28.491008 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-04-07 00:02:28.491012 | orchestrator |       + checksum = (known after apply)
2026-04-07 00:02:28.491016 | orchestrator |       + created_at = (known after apply)
2026-04-07 00:02:28.491021 | orchestrator |       + file = (known after apply)
2026-04-07 00:02:28.491025 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491046 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491051 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-07 00:02:28.491055 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-07 00:02:28.491059 | orchestrator |       + most_recent = true
2026-04-07 00:02:28.491063 | orchestrator |       + name = (known after apply)
2026-04-07 00:02:28.491067 | orchestrator |       + protected = (known after apply)
2026-04-07 00:02:28.491071 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491078 | orchestrator |       + schema = (known after apply)
2026-04-07 00:02:28.491082 | orchestrator |       + size_bytes = (known after apply)
2026-04-07 00:02:28.491086 | orchestrator |       + tags = (known after apply)
2026-04-07 00:02:28.491090 | orchestrator |       + updated_at = (known after apply)
2026-04-07 00:02:28.491094 | orchestrator |     }
2026-04-07 00:02:28.491098 | orchestrator |
2026-04-07 00:02:28.491102 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-04-07 00:02:28.491106 | orchestrator |   # (config refers to values not yet known)
2026-04-07 00:02:28.491110 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-04-07 00:02:28.491114 | orchestrator |       + checksum = (known after apply)
2026-04-07 00:02:28.491118 | orchestrator |       + created_at = (known after apply)
2026-04-07 00:02:28.491122 | orchestrator |       + file = (known after apply)
2026-04-07 00:02:28.491126 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491129 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491133 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-07 00:02:28.491137 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-07 00:02:28.491141 | orchestrator |       + most_recent = true
2026-04-07 00:02:28.491145 | orchestrator |       + name = (known after apply)
2026-04-07 00:02:28.491149 | orchestrator |       + protected = (known after apply)
2026-04-07 00:02:28.491152 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491156 | orchestrator |       + schema = (known after apply)
2026-04-07 00:02:28.491160 | orchestrator |       + size_bytes = (known after apply)
2026-04-07 00:02:28.491164 | orchestrator |       + tags = (known after apply)
2026-04-07 00:02:28.491168 | orchestrator |       + updated_at = (known after apply)
2026-04-07 00:02:28.491171 | orchestrator |     }
2026-04-07 00:02:28.491175 | orchestrator |
2026-04-07 00:02:28.491179 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-04-07 00:02:28.491183 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-04-07 00:02:28.491187 | orchestrator |       + content = (known after apply)
2026-04-07 00:02:28.491191 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-07 00:02:28.491195 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-07 00:02:28.491199 | orchestrator |       + content_md5 = (known after apply)
2026-04-07 00:02:28.491203 | orchestrator |       + content_sha1 = (known after apply)
2026-04-07 00:02:28.491206 | orchestrator |       + content_sha256 = (known after apply)
2026-04-07 00:02:28.491210 | orchestrator |       + content_sha512 = (known after apply)
2026-04-07 00:02:28.491214 | orchestrator |       + directory_permission = "0777"
2026-04-07 00:02:28.491218 | orchestrator |       + file_permission = "0644"
2026-04-07 00:02:28.491222 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-04-07 00:02:28.491225 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491229 | orchestrator |     }
2026-04-07 00:02:28.491233 | orchestrator |
2026-04-07 00:02:28.491237 | orchestrator |   # local_file.id_rsa_pub will be created
2026-04-07 00:02:28.491241 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-04-07 00:02:28.491244 | orchestrator |       + content = (known after apply)
2026-04-07 00:02:28.491248 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-07 00:02:28.491252 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-07 00:02:28.491256 | orchestrator |       + content_md5 = (known after apply)
2026-04-07 00:02:28.491260 | orchestrator |       + content_sha1 = (known after apply)
2026-04-07 00:02:28.491263 | orchestrator |       + content_sha256 = (known after apply)
2026-04-07 00:02:28.491267 | orchestrator |       + content_sha512 = (known after apply)
2026-04-07 00:02:28.491271 | orchestrator |       + directory_permission = "0777"
2026-04-07 00:02:28.491275 | orchestrator |       + file_permission = "0644"
2026-04-07 00:02:28.491282 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-04-07 00:02:28.491286 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491290 | orchestrator |     }
2026-04-07 00:02:28.491294 | orchestrator |
2026-04-07 00:02:28.491305 | orchestrator |   # local_file.inventory will be created
2026-04-07 00:02:28.491309 | orchestrator |   + resource "local_file" "inventory" {
2026-04-07 00:02:28.491313 | orchestrator |       + content = (known after apply)
2026-04-07 00:02:28.491317 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-07 00:02:28.491320 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-07 00:02:28.491324 | orchestrator |       + content_md5 = (known after apply)
2026-04-07 00:02:28.491328 | orchestrator |       + content_sha1 = (known after apply)
2026-04-07 00:02:28.491332 | orchestrator |       + content_sha256 = (known after apply)
2026-04-07 00:02:28.491336 | orchestrator |       + content_sha512 = (known after apply)
2026-04-07 00:02:28.491340 | orchestrator |       + directory_permission = "0777"
2026-04-07 00:02:28.491344 | orchestrator |       + file_permission = "0644"
2026-04-07 00:02:28.491347 | orchestrator |       + filename = "inventory.ci"
2026-04-07 00:02:28.491351 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491355 | orchestrator |     }
2026-04-07 00:02:28.491359 | orchestrator |
2026-04-07 00:02:28.491363 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-04-07 00:02:28.491366 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-04-07 00:02:28.491370 | orchestrator |       + content = (sensitive value)
2026-04-07 00:02:28.491374 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-07 00:02:28.491378 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-07 00:02:28.491382 | orchestrator |       + content_md5 = (known after apply)
2026-04-07 00:02:28.491385 | orchestrator |       + content_sha1 = (known after apply)
2026-04-07 00:02:28.491389 | orchestrator |       + content_sha256 = (known after apply)
2026-04-07 00:02:28.491402 | orchestrator |       + content_sha512 = (known after apply)
2026-04-07 00:02:28.491406 | orchestrator |       + directory_permission = "0700"
2026-04-07 00:02:28.491410 | orchestrator |       + file_permission = "0600"
2026-04-07 00:02:28.491414 | orchestrator |       + filename = ".id_rsa.ci"
2026-04-07 00:02:28.491418 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491421 | orchestrator |     }
2026-04-07 00:02:28.491425 | orchestrator |
2026-04-07 00:02:28.491429 | orchestrator |   # null_resource.node_semaphore will be created
2026-04-07 00:02:28.491433 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-04-07 00:02:28.491437 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491441 | orchestrator |     }
2026-04-07 00:02:28.491444 | orchestrator |
2026-04-07 00:02:28.491448 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-07 00:02:28.491452 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-07 00:02:28.491456 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.491460 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.491464 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491467 | orchestrator |       + image_id = (known after apply)
2026-04-07 00:02:28.491471 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491475 | orchestrator |       + name = "testbed-volume-manager-base"
2026-04-07 00:02:28.491479 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491483 | orchestrator |       + size = 80
2026-04-07 00:02:28.491486 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.491490 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.491494 | orchestrator |     }
2026-04-07 00:02:28.491498 | orchestrator |
2026-04-07 00:02:28.491501 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-07 00:02:28.491505 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 00:02:28.491509 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.491513 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.491517 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491524 | orchestrator |       + image_id = (known after apply)
2026-04-07 00:02:28.491527 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491531 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-04-07 00:02:28.491535 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491539 | orchestrator |       + size = 80
2026-04-07 00:02:28.491543 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.491546 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.491550 | orchestrator |     }
2026-04-07 00:02:28.491554 | orchestrator |
2026-04-07 00:02:28.491558 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-07 00:02:28.491562 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 00:02:28.491565 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.491569 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.491573 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491617 | orchestrator |       + image_id = (known after apply)
2026-04-07 00:02:28.491622 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491626 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-04-07 00:02:28.491629 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491633 | orchestrator |       + size = 80
2026-04-07 00:02:28.491637 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.491641 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.491645 | orchestrator |     }
2026-04-07 00:02:28.491649 | orchestrator |
2026-04-07 00:02:28.491652 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-07 00:02:28.491656 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 00:02:28.491660 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.491664 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.491668 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491672 | orchestrator |       + image_id = (known after apply)
2026-04-07 00:02:28.491675 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491679 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-04-07 00:02:28.491683 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491687 | orchestrator |       + size = 80
2026-04-07 00:02:28.491691 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.491694 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.491698 | orchestrator |     }
2026-04-07 00:02:28.491702 | orchestrator |
2026-04-07 00:02:28.491706 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-07 00:02:28.491710 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 00:02:28.491713 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.491717 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.491721 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491725 | orchestrator |       + image_id = (known after apply)
2026-04-07 00:02:28.491729 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491736 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-04-07 00:02:28.491740 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491744 | orchestrator |       + size = 80
2026-04-07 00:02:28.491748 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.491752 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.491755 | orchestrator |     }
2026-04-07 00:02:28.491759 | orchestrator |
2026-04-07 00:02:28.491763 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-07 00:02:28.491767 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 00:02:28.491771 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.491775 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.491779 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491786 | orchestrator |       + image_id = (known after apply)
2026-04-07 00:02:28.491789 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491793 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-04-07 00:02:28.491797 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491801 | orchestrator |       + size = 80
2026-04-07 00:02:28.491805 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.491808 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.491812 | orchestrator |     }
2026-04-07 00:02:28.491816 | orchestrator |
2026-04-07 00:02:28.491820 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-07 00:02:28.491827 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 00:02:28.491831 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.491835 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.491839 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491842 | orchestrator |       + image_id = (known after apply)
2026-04-07 00:02:28.491846 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491850 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-04-07 00:02:28.491854 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491858 | orchestrator |       + size = 80
2026-04-07 00:02:28.491862 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.491865 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.491869 | orchestrator |     }
2026-04-07 00:02:28.491873 | orchestrator |
2026-04-07 00:02:28.491877 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-07 00:02:28.491881 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 00:02:28.491885 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.491889 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.491892 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491896 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491900 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-04-07 00:02:28.491904 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491908 | orchestrator |       + size = 20
2026-04-07 00:02:28.491912 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.491916 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.491920 | orchestrator |     }
2026-04-07 00:02:28.491923 | orchestrator |
2026-04-07 00:02:28.491927 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-07 00:02:28.491931 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 00:02:28.491935 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.491939 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.491943 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.491947 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.491950 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-04-07 00:02:28.491954 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.491958 | orchestrator |       + size = 20
2026-04-07 00:02:28.491962 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.491966 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.491970 | orchestrator |     }
2026-04-07 00:02:28.491973 | orchestrator |
2026-04-07 00:02:28.491977 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-07 00:02:28.491981 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 00:02:28.491985 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.492021 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.492025 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.492029 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.492033 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-04-07 00:02:28.492037 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.492044 | orchestrator |       + size = 20
2026-04-07 00:02:28.492048 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.492052 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.492056 | orchestrator |     }
2026-04-07 00:02:28.492060 | orchestrator |
2026-04-07 00:02:28.492063 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-07 00:02:28.492067 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 00:02:28.492071 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.492075 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.492079 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.492082 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.492086 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-04-07 00:02:28.492090 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.492094 | orchestrator |       + size = 20
2026-04-07 00:02:28.492097 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.492101 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.492105 | orchestrator |     }
2026-04-07 00:02:28.492109 | orchestrator |
2026-04-07 00:02:28.492113 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-07 00:02:28.492116 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 00:02:28.492120 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.492124 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.492128 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.492131 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.492135 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-04-07 00:02:28.492139 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.492145 | orchestrator |       + size = 20
2026-04-07 00:02:28.492149 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.492153 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.492157 | orchestrator |     }
2026-04-07 00:02:28.492161 | orchestrator |
2026-04-07 00:02:28.492164 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-07 00:02:28.492168 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 00:02:28.492172 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.492176 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.492180 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.492183 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.492187 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-04-07 00:02:28.492191 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.492195 | orchestrator |       + size = 20
2026-04-07 00:02:28.492198 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.492202 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.492206 | orchestrator |     }
2026-04-07 00:02:28.492210 | orchestrator |
2026-04-07 00:02:28.492214 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-07 00:02:28.492217 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 00:02:28.492221 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.492225 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.492229 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.492235 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.492239 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-04-07 00:02:28.492243 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.492246 | orchestrator |       + size = 20
2026-04-07 00:02:28.492250 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.492254 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.492258 | orchestrator |     }
2026-04-07 00:02:28.492262 | orchestrator |
2026-04-07 00:02:28.492265 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-07 00:02:28.492269 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 00:02:28.492277 | orchestrator |       + attachment = (known after apply)
2026-04-07 00:02:28.492281 | orchestrator |       + availability_zone = "nova"
2026-04-07 00:02:28.492285 | orchestrator |       + id = (known after apply)
2026-04-07 00:02:28.492288 | orchestrator |       + metadata = (known after apply)
2026-04-07 00:02:28.492292 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-04-07 00:02:28.492296 | orchestrator |       + region = (known after apply)
2026-04-07 00:02:28.492300 | orchestrator |       + size = 20
2026-04-07 00:02:28.492304 | orchestrator |       + volume_retype_policy = "never"
2026-04-07 00:02:28.492307 | orchestrator |       + volume_type = "ssd"
2026-04-07 00:02:28.492311 | orchestrator |     }
2026-04-07 00:02:28.492315 | orchestrator |
2026-04-07 00:02:28.492319 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-07 00:02:28.492323 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-07 00:02:28.492327 | orchestrator | + attachment = (known after apply) 2026-04-07 00:02:28.492330 | orchestrator | + availability_zone = "nova" 2026-04-07 00:02:28.492334 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.492338 | orchestrator | + metadata = (known after apply) 2026-04-07 00:02:28.492342 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-07 00:02:28.492346 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.492349 | orchestrator | + size = 20 2026-04-07 00:02:28.492353 | orchestrator | + volume_retype_policy = "never" 2026-04-07 00:02:28.492357 | orchestrator | + volume_type = "ssd" 2026-04-07 00:02:28.492361 | orchestrator | } 2026-04-07 00:02:28.492364 | orchestrator | 2026-04-07 00:02:28.492368 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-07 00:02:28.492372 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-07 00:02:28.492376 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 00:02:28.492380 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 00:02:28.492383 | orchestrator | + all_metadata = (known after apply) 2026-04-07 00:02:28.492387 | orchestrator | + all_tags = (known after apply) 2026-04-07 00:02:28.492391 | orchestrator | + availability_zone = "nova" 2026-04-07 00:02:28.492395 | orchestrator | + config_drive = true 2026-04-07 00:02:28.492399 | orchestrator | + created = (known after apply) 2026-04-07 00:02:28.492402 | orchestrator | + flavor_id = (known after apply) 2026-04-07 00:02:28.492406 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-07 00:02:28.492410 | orchestrator | + force_delete = false 2026-04-07 00:02:28.492414 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 00:02:28.492417 | 
orchestrator | + id = (known after apply) 2026-04-07 00:02:28.492421 | orchestrator | + image_id = (known after apply) 2026-04-07 00:02:28.492425 | orchestrator | + image_name = (known after apply) 2026-04-07 00:02:28.492429 | orchestrator | + key_pair = "testbed" 2026-04-07 00:02:28.492433 | orchestrator | + name = "testbed-manager" 2026-04-07 00:02:28.492436 | orchestrator | + power_state = "active" 2026-04-07 00:02:28.492440 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.492444 | orchestrator | + security_groups = (known after apply) 2026-04-07 00:02:28.492448 | orchestrator | + stop_before_destroy = false 2026-04-07 00:02:28.492451 | orchestrator | + updated = (known after apply) 2026-04-07 00:02:28.492455 | orchestrator | + user_data = (sensitive value) 2026-04-07 00:02:28.492459 | orchestrator | 2026-04-07 00:02:28.492463 | orchestrator | + block_device { 2026-04-07 00:02:28.492467 | orchestrator | + boot_index = 0 2026-04-07 00:02:28.492471 | orchestrator | + delete_on_termination = false 2026-04-07 00:02:28.492477 | orchestrator | + destination_type = "volume" 2026-04-07 00:02:28.492481 | orchestrator | + multiattach = false 2026-04-07 00:02:28.492484 | orchestrator | + source_type = "volume" 2026-04-07 00:02:28.492488 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.492496 | orchestrator | } 2026-04-07 00:02:28.492500 | orchestrator | 2026-04-07 00:02:28.492504 | orchestrator | + network { 2026-04-07 00:02:28.492508 | orchestrator | + access_network = false 2026-04-07 00:02:28.492512 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 00:02:28.492515 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 00:02:28.492519 | orchestrator | + mac = (known after apply) 2026-04-07 00:02:28.492523 | orchestrator | + name = (known after apply) 2026-04-07 00:02:28.492527 | orchestrator | + port = (known after apply) 2026-04-07 00:02:28.492531 | orchestrator | + uuid = (known after apply) 2026-04-07 
00:02:28.492534 | orchestrator | } 2026-04-07 00:02:28.492538 | orchestrator | } 2026-04-07 00:02:28.492542 | orchestrator | 2026-04-07 00:02:28.492546 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-07 00:02:28.492550 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 00:02:28.492553 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 00:02:28.492557 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 00:02:28.492561 | orchestrator | + all_metadata = (known after apply) 2026-04-07 00:02:28.492565 | orchestrator | + all_tags = (known after apply) 2026-04-07 00:02:28.492568 | orchestrator | + availability_zone = "nova" 2026-04-07 00:02:28.492572 | orchestrator | + config_drive = true 2026-04-07 00:02:28.492576 | orchestrator | + created = (known after apply) 2026-04-07 00:02:28.492580 | orchestrator | + flavor_id = (known after apply) 2026-04-07 00:02:28.492583 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 00:02:28.492587 | orchestrator | + force_delete = false 2026-04-07 00:02:28.492591 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 00:02:28.492595 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.492599 | orchestrator | + image_id = (known after apply) 2026-04-07 00:02:28.492602 | orchestrator | + image_name = (known after apply) 2026-04-07 00:02:28.492606 | orchestrator | + key_pair = "testbed" 2026-04-07 00:02:28.492610 | orchestrator | + name = "testbed-node-0" 2026-04-07 00:02:28.492614 | orchestrator | + power_state = "active" 2026-04-07 00:02:28.492620 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.492624 | orchestrator | + security_groups = (known after apply) 2026-04-07 00:02:28.492627 | orchestrator | + stop_before_destroy = false 2026-04-07 00:02:28.492631 | orchestrator | + updated = (known after apply) 2026-04-07 00:02:28.492635 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 00:02:28.492639 | orchestrator | 2026-04-07 00:02:28.492643 | orchestrator | + block_device { 2026-04-07 00:02:28.492647 | orchestrator | + boot_index = 0 2026-04-07 00:02:28.492650 | orchestrator | + delete_on_termination = false 2026-04-07 00:02:28.492654 | orchestrator | + destination_type = "volume" 2026-04-07 00:02:28.492658 | orchestrator | + multiattach = false 2026-04-07 00:02:28.492662 | orchestrator | + source_type = "volume" 2026-04-07 00:02:28.492665 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.492669 | orchestrator | } 2026-04-07 00:02:28.492673 | orchestrator | 2026-04-07 00:02:28.492677 | orchestrator | + network { 2026-04-07 00:02:28.492681 | orchestrator | + access_network = false 2026-04-07 00:02:28.492685 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 00:02:28.492688 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 00:02:28.492692 | orchestrator | + mac = (known after apply) 2026-04-07 00:02:28.492696 | orchestrator | + name = (known after apply) 2026-04-07 00:02:28.492700 | orchestrator | + port = (known after apply) 2026-04-07 00:02:28.492703 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.492707 | orchestrator | } 2026-04-07 00:02:28.492711 | orchestrator | } 2026-04-07 00:02:28.492715 | orchestrator | 2026-04-07 00:02:28.492719 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-07 00:02:28.492722 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 00:02:28.492726 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 00:02:28.492733 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 00:02:28.492737 | orchestrator | + all_metadata = (known after apply) 2026-04-07 00:02:28.492741 | orchestrator | + all_tags = (known after apply) 2026-04-07 00:02:28.492744 | orchestrator | + availability_zone = "nova" 2026-04-07 00:02:28.492748 
| orchestrator | + config_drive = true 2026-04-07 00:02:28.492752 | orchestrator | + created = (known after apply) 2026-04-07 00:02:28.492756 | orchestrator | + flavor_id = (known after apply) 2026-04-07 00:02:28.492760 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 00:02:28.492763 | orchestrator | + force_delete = false 2026-04-07 00:02:28.492767 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 00:02:28.492771 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.492775 | orchestrator | + image_id = (known after apply) 2026-04-07 00:02:28.492778 | orchestrator | + image_name = (known after apply) 2026-04-07 00:02:28.492782 | orchestrator | + key_pair = "testbed" 2026-04-07 00:02:28.492786 | orchestrator | + name = "testbed-node-1" 2026-04-07 00:02:28.492790 | orchestrator | + power_state = "active" 2026-04-07 00:02:28.492794 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.492797 | orchestrator | + security_groups = (known after apply) 2026-04-07 00:02:28.492801 | orchestrator | + stop_before_destroy = false 2026-04-07 00:02:28.492805 | orchestrator | + updated = (known after apply) 2026-04-07 00:02:28.492809 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 00:02:28.492813 | orchestrator | 2026-04-07 00:02:28.492816 | orchestrator | + block_device { 2026-04-07 00:02:28.492820 | orchestrator | + boot_index = 0 2026-04-07 00:02:28.492824 | orchestrator | + delete_on_termination = false 2026-04-07 00:02:28.492828 | orchestrator | + destination_type = "volume" 2026-04-07 00:02:28.492832 | orchestrator | + multiattach = false 2026-04-07 00:02:28.492835 | orchestrator | + source_type = "volume" 2026-04-07 00:02:28.492839 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.492843 | orchestrator | } 2026-04-07 00:02:28.492847 | orchestrator | 2026-04-07 00:02:28.492850 | orchestrator | + network { 2026-04-07 00:02:28.492854 | orchestrator | + access_network = 
false 2026-04-07 00:02:28.492858 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 00:02:28.492862 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 00:02:28.492866 | orchestrator | + mac = (known after apply) 2026-04-07 00:02:28.492869 | orchestrator | + name = (known after apply) 2026-04-07 00:02:28.492873 | orchestrator | + port = (known after apply) 2026-04-07 00:02:28.492877 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.492881 | orchestrator | } 2026-04-07 00:02:28.492884 | orchestrator | } 2026-04-07 00:02:28.492888 | orchestrator | 2026-04-07 00:02:28.492892 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-07 00:02:28.492896 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 00:02:28.492900 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 00:02:28.492903 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 00:02:28.492907 | orchestrator | + all_metadata = (known after apply) 2026-04-07 00:02:28.492911 | orchestrator | + all_tags = (known after apply) 2026-04-07 00:02:28.492917 | orchestrator | + availability_zone = "nova" 2026-04-07 00:02:28.492921 | orchestrator | + config_drive = true 2026-04-07 00:02:28.492925 | orchestrator | + created = (known after apply) 2026-04-07 00:02:28.492929 | orchestrator | + flavor_id = (known after apply) 2026-04-07 00:02:28.492933 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 00:02:28.492936 | orchestrator | + force_delete = false 2026-04-07 00:02:28.492940 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 00:02:28.492944 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.492948 | orchestrator | + image_id = (known after apply) 2026-04-07 00:02:28.492955 | orchestrator | + image_name = (known after apply) 2026-04-07 00:02:28.492958 | orchestrator | + key_pair = "testbed" 2026-04-07 00:02:28.492962 | orchestrator | + name = 
"testbed-node-2" 2026-04-07 00:02:28.492966 | orchestrator | + power_state = "active" 2026-04-07 00:02:28.492970 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.492973 | orchestrator | + security_groups = (known after apply) 2026-04-07 00:02:28.492977 | orchestrator | + stop_before_destroy = false 2026-04-07 00:02:28.492981 | orchestrator | + updated = (known after apply) 2026-04-07 00:02:28.492985 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 00:02:28.493000 | orchestrator | 2026-04-07 00:02:28.493004 | orchestrator | + block_device { 2026-04-07 00:02:28.493007 | orchestrator | + boot_index = 0 2026-04-07 00:02:28.493011 | orchestrator | + delete_on_termination = false 2026-04-07 00:02:28.493015 | orchestrator | + destination_type = "volume" 2026-04-07 00:02:28.493021 | orchestrator | + multiattach = false 2026-04-07 00:02:28.493025 | orchestrator | + source_type = "volume" 2026-04-07 00:02:28.493028 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.493032 | orchestrator | } 2026-04-07 00:02:28.493036 | orchestrator | 2026-04-07 00:02:28.493040 | orchestrator | + network { 2026-04-07 00:02:28.493043 | orchestrator | + access_network = false 2026-04-07 00:02:28.493047 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 00:02:28.493051 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 00:02:28.493054 | orchestrator | + mac = (known after apply) 2026-04-07 00:02:28.493058 | orchestrator | + name = (known after apply) 2026-04-07 00:02:28.493062 | orchestrator | + port = (known after apply) 2026-04-07 00:02:28.493066 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.493069 | orchestrator | } 2026-04-07 00:02:28.493073 | orchestrator | } 2026-04-07 00:02:28.493077 | orchestrator | 2026-04-07 00:02:28.493081 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-07 00:02:28.493084 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-07 00:02:28.493088 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 00:02:28.493092 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 00:02:28.493096 | orchestrator | + all_metadata = (known after apply) 2026-04-07 00:02:28.493099 | orchestrator | + all_tags = (known after apply) 2026-04-07 00:02:28.493103 | orchestrator | + availability_zone = "nova" 2026-04-07 00:02:28.493107 | orchestrator | + config_drive = true 2026-04-07 00:02:28.493110 | orchestrator | + created = (known after apply) 2026-04-07 00:02:28.493114 | orchestrator | + flavor_id = (known after apply) 2026-04-07 00:02:28.493118 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 00:02:28.493122 | orchestrator | + force_delete = false 2026-04-07 00:02:28.493125 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 00:02:28.493129 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.493133 | orchestrator | + image_id = (known after apply) 2026-04-07 00:02:28.493136 | orchestrator | + image_name = (known after apply) 2026-04-07 00:02:28.493140 | orchestrator | + key_pair = "testbed" 2026-04-07 00:02:28.493144 | orchestrator | + name = "testbed-node-3" 2026-04-07 00:02:28.493148 | orchestrator | + power_state = "active" 2026-04-07 00:02:28.493151 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.493155 | orchestrator | + security_groups = (known after apply) 2026-04-07 00:02:28.493159 | orchestrator | + stop_before_destroy = false 2026-04-07 00:02:28.493162 | orchestrator | + updated = (known after apply) 2026-04-07 00:02:28.493166 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 00:02:28.493170 | orchestrator | 2026-04-07 00:02:28.493174 | orchestrator | + block_device { 2026-04-07 00:02:28.493180 | orchestrator | + boot_index = 0 2026-04-07 00:02:28.493184 | orchestrator | + delete_on_termination = false 2026-04-07 
00:02:28.493187 | orchestrator | + destination_type = "volume" 2026-04-07 00:02:28.493194 | orchestrator | + multiattach = false 2026-04-07 00:02:28.493198 | orchestrator | + source_type = "volume" 2026-04-07 00:02:28.493202 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.493206 | orchestrator | } 2026-04-07 00:02:28.493209 | orchestrator | 2026-04-07 00:02:28.493213 | orchestrator | + network { 2026-04-07 00:02:28.493217 | orchestrator | + access_network = false 2026-04-07 00:02:28.493221 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 00:02:28.493224 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 00:02:28.493228 | orchestrator | + mac = (known after apply) 2026-04-07 00:02:28.493232 | orchestrator | + name = (known after apply) 2026-04-07 00:02:28.493236 | orchestrator | + port = (known after apply) 2026-04-07 00:02:28.493239 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.493243 | orchestrator | } 2026-04-07 00:02:28.493247 | orchestrator | } 2026-04-07 00:02:28.493251 | orchestrator | 2026-04-07 00:02:28.493254 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-07 00:02:28.493258 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 00:02:28.493262 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 00:02:28.493266 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 00:02:28.493270 | orchestrator | + all_metadata = (known after apply) 2026-04-07 00:02:28.493273 | orchestrator | + all_tags = (known after apply) 2026-04-07 00:02:28.493277 | orchestrator | + availability_zone = "nova" 2026-04-07 00:02:28.493281 | orchestrator | + config_drive = true 2026-04-07 00:02:28.493284 | orchestrator | + created = (known after apply) 2026-04-07 00:02:28.493288 | orchestrator | + flavor_id = (known after apply) 2026-04-07 00:02:28.493292 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 00:02:28.493296 | 
orchestrator | + force_delete = false 2026-04-07 00:02:28.493299 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 00:02:28.493303 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.493307 | orchestrator | + image_id = (known after apply) 2026-04-07 00:02:28.493311 | orchestrator | + image_name = (known after apply) 2026-04-07 00:02:28.493314 | orchestrator | + key_pair = "testbed" 2026-04-07 00:02:28.493318 | orchestrator | + name = "testbed-node-4" 2026-04-07 00:02:28.493322 | orchestrator | + power_state = "active" 2026-04-07 00:02:28.493325 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.493329 | orchestrator | + security_groups = (known after apply) 2026-04-07 00:02:28.493333 | orchestrator | + stop_before_destroy = false 2026-04-07 00:02:28.493337 | orchestrator | + updated = (known after apply) 2026-04-07 00:02:28.493340 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 00:02:28.493344 | orchestrator | 2026-04-07 00:02:28.493348 | orchestrator | + block_device { 2026-04-07 00:02:28.493352 | orchestrator | + boot_index = 0 2026-04-07 00:02:28.493356 | orchestrator | + delete_on_termination = false 2026-04-07 00:02:28.493359 | orchestrator | + destination_type = "volume" 2026-04-07 00:02:28.493363 | orchestrator | + multiattach = false 2026-04-07 00:02:28.493367 | orchestrator | + source_type = "volume" 2026-04-07 00:02:28.493371 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.493374 | orchestrator | } 2026-04-07 00:02:28.493378 | orchestrator | 2026-04-07 00:02:28.493382 | orchestrator | + network { 2026-04-07 00:02:28.493386 | orchestrator | + access_network = false 2026-04-07 00:02:28.493389 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 00:02:28.493393 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 00:02:28.493397 | orchestrator | + mac = (known after apply) 2026-04-07 00:02:28.493401 | orchestrator | + name = (known 
after apply) 2026-04-07 00:02:28.493404 | orchestrator | + port = (known after apply) 2026-04-07 00:02:28.493410 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.493414 | orchestrator | } 2026-04-07 00:02:28.493418 | orchestrator | } 2026-04-07 00:02:28.493434 | orchestrator | 2026-04-07 00:02:28.493438 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-07 00:02:28.493442 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 00:02:28.493445 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 00:02:28.493449 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 00:02:28.493453 | orchestrator | + all_metadata = (known after apply) 2026-04-07 00:02:28.493457 | orchestrator | + all_tags = (known after apply) 2026-04-07 00:02:28.493460 | orchestrator | + availability_zone = "nova" 2026-04-07 00:02:28.493464 | orchestrator | + config_drive = true 2026-04-07 00:02:28.493468 | orchestrator | + created = (known after apply) 2026-04-07 00:02:28.493471 | orchestrator | + flavor_id = (known after apply) 2026-04-07 00:02:28.493475 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 00:02:28.493479 | orchestrator | + force_delete = false 2026-04-07 00:02:28.493486 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 00:02:28.493490 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.493494 | orchestrator | + image_id = (known after apply) 2026-04-07 00:02:28.493498 | orchestrator | + image_name = (known after apply) 2026-04-07 00:02:28.493502 | orchestrator | + key_pair = "testbed" 2026-04-07 00:02:28.493505 | orchestrator | + name = "testbed-node-5" 2026-04-07 00:02:28.493509 | orchestrator | + power_state = "active" 2026-04-07 00:02:28.493513 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.493516 | orchestrator | + security_groups = (known after apply) 2026-04-07 00:02:28.493520 | orchestrator | + 
stop_before_destroy = false 2026-04-07 00:02:28.493524 | orchestrator | + updated = (known after apply) 2026-04-07 00:02:28.493528 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 00:02:28.493531 | orchestrator | 2026-04-07 00:02:28.493535 | orchestrator | + block_device { 2026-04-07 00:02:28.493539 | orchestrator | + boot_index = 0 2026-04-07 00:02:28.493543 | orchestrator | + delete_on_termination = false 2026-04-07 00:02:28.493546 | orchestrator | + destination_type = "volume" 2026-04-07 00:02:28.493550 | orchestrator | + multiattach = false 2026-04-07 00:02:28.493554 | orchestrator | + source_type = "volume" 2026-04-07 00:02:28.493558 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.493561 | orchestrator | } 2026-04-07 00:02:28.493565 | orchestrator | 2026-04-07 00:02:28.493569 | orchestrator | + network { 2026-04-07 00:02:28.493573 | orchestrator | + access_network = false 2026-04-07 00:02:28.493576 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 00:02:28.493580 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 00:02:28.493584 | orchestrator | + mac = (known after apply) 2026-04-07 00:02:28.493588 | orchestrator | + name = (known after apply) 2026-04-07 00:02:28.493591 | orchestrator | + port = (known after apply) 2026-04-07 00:02:28.493595 | orchestrator | + uuid = (known after apply) 2026-04-07 00:02:28.493599 | orchestrator | } 2026-04-07 00:02:28.493603 | orchestrator | } 2026-04-07 00:02:28.493606 | orchestrator | 2026-04-07 00:02:28.493610 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-07 00:02:28.493614 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-07 00:02:28.493618 | orchestrator | + fingerprint = (known after apply) 2026-04-07 00:02:28.493621 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.493625 | orchestrator | + name = "testbed" 2026-04-07 00:02:28.493629 | orchestrator | + private_key = 
(sensitive value) 2026-04-07 00:02:28.493633 | orchestrator | + public_key = (known after apply) 2026-04-07 00:02:28.493636 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.493640 | orchestrator | + user_id = (known after apply) 2026-04-07 00:02:28.493644 | orchestrator | } 2026-04-07 00:02:28.493648 | orchestrator | 2026-04-07 00:02:28.493652 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-07 00:02:28.493656 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-07 00:02:28.493663 | orchestrator | + device = (known after apply) 2026-04-07 00:02:28.493667 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.493670 | orchestrator | + instance_id = (known after apply) 2026-04-07 00:02:28.493674 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.493678 | orchestrator | + volume_id = (known after apply) 2026-04-07 00:02:28.493682 | orchestrator | } 2026-04-07 00:02:28.493685 | orchestrator | 2026-04-07 00:02:28.493689 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-07 00:02:28.493693 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-07 00:02:28.493697 | orchestrator | + device = (known after apply) 2026-04-07 00:02:28.493701 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.493704 | orchestrator | + instance_id = (known after apply) 2026-04-07 00:02:28.493708 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.493712 | orchestrator | + volume_id = (known after apply) 2026-04-07 00:02:28.493715 | orchestrator | } 2026-04-07 00:02:28.493719 | orchestrator | 2026-04-07 00:02:28.493723 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-07 00:02:28.493727 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-07 00:02:28.496110 | orchestrator | + network_id = (known after apply) 2026-04-07 00:02:28.496115 | orchestrator | + no_gateway = false 2026-04-07 00:02:28.496119 | orchestrator | + region = (known after apply) 2026-04-07 00:02:28.496123 | orchestrator | + service_types = (known after apply) 2026-04-07 00:02:28.496130 | orchestrator | + tenant_id = (known after apply) 2026-04-07 00:02:28.496134 | orchestrator | 2026-04-07 00:02:28.496138 | orchestrator | + allocation_pool { 2026-04-07 00:02:28.496143 | orchestrator | + end = "192.168.31.250" 2026-04-07 00:02:28.496147 | orchestrator | + start = "192.168.31.200" 2026-04-07 00:02:28.496151 | orchestrator | } 2026-04-07 00:02:28.496156 | orchestrator | } 2026-04-07 00:02:28.496160 | orchestrator | 2026-04-07 00:02:28.496165 | orchestrator | # terraform_data.image will be created 2026-04-07 00:02:28.496169 | orchestrator | + resource "terraform_data" "image" { 2026-04-07 00:02:28.496173 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.496178 | orchestrator | + input = "Ubuntu 24.04" 2026-04-07 00:02:28.496182 | orchestrator | + output = (known after apply) 2026-04-07 00:02:28.496186 | orchestrator | } 2026-04-07 00:02:28.496190 | orchestrator | 2026-04-07 00:02:28.496195 | orchestrator | # terraform_data.image_node will be created 2026-04-07 00:02:28.496199 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-07 00:02:28.496203 | orchestrator | + id = (known after apply) 2026-04-07 00:02:28.496207 | orchestrator | + input = "Ubuntu 24.04" 2026-04-07 00:02:28.496211 | orchestrator | + output = (known after apply) 2026-04-07 00:02:28.496215 | orchestrator | } 2026-04-07 00:02:28.496219 | orchestrator | 2026-04-07 00:02:28.496223 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
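Automation around jobs like this one often gates on the plan summary line logged above. A minimal sketch of extracting the add/change/destroy counts from a captured console log, assuming the standard `Plan: N to add, ...` wording; `PLAN_RE` and `parse_plan_summary` are illustrative names, not part of the testbed or Zuul tooling (in real pipelines, Terraform's machine-readable JSON output is the more robust source):

```python
import re

# Matches a Terraform plan summary line such as:
# "Plan: 64 to add, 0 to change, 0 to destroy."
PLAN_RE = re.compile(
    r"Plan: (?P<add>\d+) to add, (?P<change>\d+) to change, (?P<destroy>\d+) to destroy\."
)

def parse_plan_summary(line: str) -> dict:
    """Return the add/change/destroy counts parsed from a plan summary line."""
    m = PLAN_RE.search(line)
    if m is None:
        raise ValueError("no plan summary found in line")
    return {key: int(value) for key, value in m.groupdict().items()}

counts = parse_plan_summary("Plan: 64 to add, 0 to change, 0 to destroy.")
print(counts)  # {'add': 64, 'change': 0, 'destroy': 0}
```

A caller could, for example, fail a periodic job when `counts["destroy"]` is nonzero, since a destroy in a create-only run usually signals drift.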
2026-04-07 00:02:28.496227 | orchestrator |
2026-04-07 00:02:28.496231 | orchestrator | Changes to Outputs:
2026-04-07 00:02:28.496234 | orchestrator |   + manager_address = (sensitive value)
2026-04-07 00:02:28.496238 | orchestrator |   + private_key     = (sensitive value)
2026-04-07 00:02:28.727194 | orchestrator | terraform_data.image: Creating...
2026-04-07 00:02:28.809230 | orchestrator | terraform_data.image: Creation complete after 0s [id=966f4072-09ad-0d84-2048-f7ea36be40ab]
2026-04-07 00:02:28.811894 | orchestrator | terraform_data.image_node: Creating...
2026-04-07 00:02:28.811934 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=6140d5a8-7ed7-000d-c674-95a48cba6bb6]
2026-04-07 00:02:28.842065 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-07 00:02:28.846108 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-07 00:02:28.846140 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-07 00:02:28.846147 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-07 00:02:28.852039 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-07 00:02:28.852083 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-07 00:02:28.852093 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-07 00:02:28.862069 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-07 00:02:28.862101 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-07 00:02:28.869830 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-07 00:02:29.323613 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-07 00:02:29.333553 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
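The subnet planned above carves a DHCP allocation pool (192.168.31.200-192.168.31.250) out of the larger cidr 192.168.16.0/20. A quick sanity check that such a pool actually falls inside its network can be sketched with the standard-library `ipaddress` module; `pool_in_network` is an illustrative helper, not part of the testbed code:

```python
import ipaddress

def pool_in_network(cidr: str, start: str, end: str) -> bool:
    """True if the [start, end] address range lies entirely inside cidr."""
    net = ipaddress.ip_network(cidr)
    first = ipaddress.ip_address(start)
    last = ipaddress.ip_address(end)
    # ip_address objects compare numerically, so ordering checks are valid.
    return first in net and last in net and first <= last

# Values taken from the subnet_management plan in the log above.
print(pool_in_network("192.168.16.0/20", "192.168.31.200", "192.168.31.250"))  # True
```

192.168.16.0/20 spans 192.168.16.0 through 192.168.31.255, so the pool sits right at the top of the range.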
2026-04-07 00:02:29.340725 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-07 00:02:29.343010 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-07 00:02:30.140868 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=5625491c-c7ac-4e0d-a0ce-ccf753e9c72b]
2026-04-07 00:02:30.145564 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-07 00:02:30.210514 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-07 00:02:30.221502 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-07 00:02:30.224964 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=f37302cb8c06a484b1526baaece76b9b227c88ff]
2026-04-07 00:02:30.237105 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-07 00:02:30.239404 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=dc631e06ef09c101e0c6653989219b3003d0b83c]
2026-04-07 00:02:30.247237 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-07 00:02:32.631276 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=e2189674-a553-4d5d-8fd8-5508ff437706]
2026-04-07 00:02:32.647046 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-07 00:02:32.670235 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=89661b50-0f8c-4be3-a02e-39629210b15c]
2026-04-07 00:02:32.679746 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-07 00:02:32.690675 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=51e4949c-955e-4de9-a772-15b9aebb09fe]
2026-04-07 00:02:32.695743 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-07 00:02:32.777168 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=c3ad8b00-5bc8-428f-af67-6bd1265a9b39]
2026-04-07 00:02:32.786727 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-07 00:02:32.795572 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=fa777649-5680-4322-b615-3bf8b4a5ab2e]
2026-04-07 00:02:32.802184 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-07 00:02:32.819893 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=01ab1f04-e59c-4d36-99ed-1bd22a22bd9d]
2026-04-07 00:02:32.827528 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-07 00:02:32.877563 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=3172f6cd-16a6-47ae-9a74-28bff05f52e4]
2026-04-07 00:02:32.886253 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
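The "Creation complete after Ns" figures above are rounded; the per-entry timestamps allow a more precise measurement. A minimal sketch of computing elapsed time between two entries, assuming the `YYYY-MM-DD HH:MM:SS.ffffff` prefix format used throughout this log; `elapsed_seconds` is an illustrative helper, not part of the job tooling:

```python
from datetime import datetime

# Timestamp format used by the console log prefix in this job.
FMT = "%Y-%m-%d %H:%M:%S.%f"

def elapsed_seconds(start: str, end: str) -> float:
    """Seconds between two log timestamps in the FMT format above."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()

# node_volume[5]: "Creating..." logged at 00:02:28.862069,
# "Creation complete after 4s" logged at 00:02:32.690675.
delta = elapsed_seconds("2026-04-07 00:02:28.862069", "2026-04-07 00:02:32.690675")
print(round(delta, 1))  # 3.8
```

The 3.8 s measured here is consistent with Terraform's rounded "after 4s" report for that volume.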
2026-04-07 00:02:32.928891 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=fad897de-4fc3-471c-b210-14b98141fe30]
2026-04-07 00:02:32.938942 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=55495174-9adc-4a3f-978b-4142e2213b73]
2026-04-07 00:02:33.781514 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=63f89094-a177-4f34-9706-4c412ab91d72]
2026-04-07 00:02:33.787242 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=31c641c7-9251-4f54-b7a9-0c76fe235f8d]
2026-04-07 00:02:33.797794 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-07 00:02:36.063191 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=92e9469d-beee-4970-a2a1-38a209111f07]
2026-04-07 00:02:36.345446 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=a3bddeda-068f-4606-ac9b-bb011ef193ff]
2026-04-07 00:02:36.454870 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=3df5c3d7-f562-4b98-85e9-985d74ba8432]
2026-04-07 00:02:36.591828 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=bd517331-9c52-419f-93b8-9167504f17a1]
2026-04-07 00:02:36.623210 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=9e234e93-4956-44de-aa9e-0c10a0121988]
2026-04-07 00:02:36.694976 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=3f9b0d06-b73e-47a0-92e3-5afdd1ae564a]
2026-04-07 00:02:37.585207 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=85ded10e-e860-4b87-9c86-e8255c932bee]
2026-04-07 00:02:37.594737 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-07 00:02:37.597303 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-07 00:02:37.598943 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-07 00:02:37.883842 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b09724c4-ec1f-4d54-8f6d-0b6f5d4c952f]
2026-04-07 00:02:37.897312 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-07 00:02:37.898065 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-07 00:02:37.898350 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-07 00:02:37.899225 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-07 00:02:37.900115 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-07 00:02:37.903230 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-07 00:02:37.907154 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-07 00:02:37.911190 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-07 00:02:37.957736 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=30c0e452-1dce-47c3-9e1c-54cbd4403fb6]
2026-04-07 00:02:37.967796 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-07 00:02:38.323851 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=80b2d921-cddb-4cb4-8b9d-0e9678df886e]
2026-04-07 00:02:38.338489 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-07 00:02:38.910757 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=aac85223-cab2-4fcf-8d5a-45eefc46a8f7]
2026-04-07 00:02:38.910853 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=08f08186-40b2-405a-a08d-f1dce838a84f]
2026-04-07 00:02:38.914110 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-07 00:02:38.916096 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-07 00:02:39.019877 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=18b75ecb-33b6-45af-838e-ee749185598c]
2026-04-07 00:02:39.028734 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-07 00:02:39.043645 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=c14b769c-4685-41c0-bb07-09cfe1c60b9f]
2026-04-07 00:02:39.051936 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-07 00:02:39.221968 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=a392e08e-f648-414a-b5c1-fc9973af925f]
2026-04-07 00:02:39.230855 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-07 00:02:39.317154 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=d08b2b51-fc96-433c-8b72-1ee69bfeaba4]
2026-04-07 00:02:39.321939 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-07 00:02:39.492831 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=fc226c3b-5432-4278-ac89-6e3f33d7e6b6]
2026-04-07 00:02:39.583033 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=58923708-ef16-47a1-9d0e-7d1dfed940a7]
2026-04-07 00:02:39.639904 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=1980d88b-207e-455b-8233-b54a4b425c02]
2026-04-07 00:02:40.271446 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=796dde89-e43e-420a-ac93-31afabfe1b59]
2026-04-07 00:02:40.286805 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=a1fa3ccf-0fb8-4c83-b495-acb11da6a9bf]
2026-04-07 00:02:40.647733 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=e7051c4a-f349-41e0-bd05-6ad1d587de79]
2026-04-07 00:02:40.832983 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=14916ca5-7481-42fc-8540-136ebd3d0584]
2026-04-07 00:02:41.072192 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 3s [id=2bf5ea01-1d30-4bd8-99cb-f6ae9fa0c833]
2026-04-07 00:02:41.230521 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=76074d2d-6ede-4e85-a115-095cf30f60f9]
2026-04-07 00:02:45.182738 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=58fee2c8-fc4e-4ac1-bda0-6ce0304c97f5]
2026-04-07 00:02:45.195652 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-07 00:02:45.214113 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-07 00:02:45.214694 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-07 00:02:45.217519 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-07 00:02:45.217674 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-07 00:02:45.224975 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-07 00:02:45.231960 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-07 00:02:47.011778 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=c2843b17-c536-4171-97f0-1cdc3715d72e]
2026-04-07 00:02:47.017426 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-07 00:02:47.024013 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-07 00:02:47.024852 | orchestrator | local_file.inventory: Creating...
2026-04-07 00:02:47.030107 | orchestrator | local_file.inventory: Creation complete after 0s [id=6fbb1d28f6f5a721b631e25b3c6de233f39f951b]
2026-04-07 00:02:47.030171 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=1fdeb8b417b64fd8b481f85685e6b90652a4172e]
2026-04-07 00:02:47.850548 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=c2843b17-c536-4171-97f0-1cdc3715d72e]
2026-04-07 00:02:55.217689 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-07 00:02:55.217790 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-07 00:02:55.218801 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-07 00:02:55.218852 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-07 00:02:55.226347 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-07 00:02:55.232531 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-07 00:03:05.226744 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-07 00:03:05.226869 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-07 00:03:05.226887 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-07 00:03:05.226912 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-07 00:03:05.226924 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-07 00:03:05.233476 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-07 00:03:15.235702 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-07 00:03:15.235784 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-07 00:03:15.235791 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-07 00:03:15.235796 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-07 00:03:15.235801 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-07 00:03:15.235805 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-07 00:03:16.271223 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=85a7e9c5-ff7c-4731-9561-54168fbe1822]
2026-04-07 00:03:25.244440 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-07 00:03:25.244581 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-04-07 00:03:25.244619 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-07 00:03:25.244639 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-04-07 00:03:25.244707 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-04-07 00:03:35.250325 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-04-07 00:03:35.250435 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-04-07 00:03:35.250450 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-04-07 00:03:35.250454 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-04-07 00:03:35.250459 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-04-07 00:03:36.181618 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=f7ff17cf-29ef-406a-8e82-40ef16ccf501]
2026-04-07 00:03:45.258950 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-04-07 00:03:45.259108 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-04-07 00:03:45.259132 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-04-07 00:03:45.259141 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-04-07 00:03:55.266265 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m10s elapsed]
2026-04-07 00:03:55.266347 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m10s elapsed]
2026-04-07 00:03:55.266355 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m10s elapsed]
2026-04-07 00:03:55.266360 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m10s elapsed]
2026-04-07 00:03:56.292940 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m11s [id=c3f6c337-2777-40b6-acbf-7a4c5b53ccb1]
2026-04-07 00:03:56.351403 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m11s [id=afd8a3f9-542a-4c5c-a7be-2acfc1abc122]
2026-04-07 00:03:56.545166 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m12s [id=cd3545af-8534-495f-acd0-f5e1a5950b2e]
2026-04-07 00:04:05.267514 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m20s elapsed]
2026-04-07 00:04:06.921167 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m22s [id=6d0464dc-9b24-4a5a-9040-034d7b22b879]
2026-04-07 00:04:06.938380 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-07 00:04:06.946309 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=551880088022268547]
2026-04-07 00:04:06.948924 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-07 00:04:06.954297 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-07 00:04:06.961122 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-07 00:04:06.962404 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-07 00:04:06.966739 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-07 00:04:06.966882 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-07 00:04:06.968482 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-07 00:04:06.986168 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-07 00:04:06.989998 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-07 00:04:06.990436 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-07 00:04:10.370860 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=f7ff17cf-29ef-406a-8e82-40ef16ccf501/c3ad8b00-5bc8-428f-af67-6bd1265a9b39]
2026-04-07 00:04:10.386129 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=6d0464dc-9b24-4a5a-9040-034d7b22b879/55495174-9adc-4a3f-978b-4142e2213b73]
2026-04-07 00:04:10.422077 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=c3f6c337-2777-40b6-acbf-7a4c5b53ccb1/89661b50-0f8c-4be3-a02e-39629210b15c]
2026-04-07 00:04:10.439415 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=f7ff17cf-29ef-406a-8e82-40ef16ccf501/fa777649-5680-4322-b615-3bf8b4a5ab2e]
2026-04-07 00:04:10.452425 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=6d0464dc-9b24-4a5a-9040-034d7b22b879/3172f6cd-16a6-47ae-9a74-28bff05f52e4]
2026-04-07 00:04:10.466545 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=c3f6c337-2777-40b6-acbf-7a4c5b53ccb1/51e4949c-955e-4de9-a772-15b9aebb09fe]
2026-04-07 00:04:16.576932 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=f7ff17cf-29ef-406a-8e82-40ef16ccf501/fad897de-4fc3-471c-b210-14b98141fe30]
2026-04-07 00:04:16.592713 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=6d0464dc-9b24-4a5a-9040-034d7b22b879/e2189674-a553-4d5d-8fd8-5508ff437706]
2026-04-07 00:04:16.612424 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=c3f6c337-2777-40b6-acbf-7a4c5b53ccb1/01ab1f04-e59c-4d36-99ed-1bd22a22bd9d]
2026-04-07 00:04:16.990827 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-07 00:04:26.991289 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-07 00:04:27.478170 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=0ef1343e-7257-48bb-9fe2-1ffd26f47e9b]
2026-04-07 00:04:30.693030 | orchestrator |
2026-04-07 00:04:30.693121 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-07 00:04:30.693130 | orchestrator |
2026-04-07 00:04:30.693135 | orchestrator | Outputs:
2026-04-07 00:04:30.693139 | orchestrator |
2026-04-07 00:04:30.693143 | orchestrator | manager_address =
2026-04-07 00:04:30.693148 | orchestrator | private_key =
2026-04-07 00:04:31.045200 | orchestrator | ok: Runtime: 0:02:08.333224
2026-04-07 00:04:31.065329 |
2026-04-07 00:04:31.065478 | TASK [Create infrastructure (stable)]
2026-04-07 00:04:31.601007 | orchestrator | skipping: Conditional result was False
2026-04-07 00:04:31.620057 |
2026-04-07 00:04:31.620258 | TASK [Fetch manager address]
2026-04-07 00:04:32.139073 | orchestrator | ok
2026-04-07 00:04:32.148020 |
2026-04-07 00:04:32.148141 | TASK [Set manager_host address]
2026-04-07 00:04:32.231275 | orchestrator | ok
2026-04-07 00:04:32.242674 |
2026-04-07 00:04:32.242824 | LOOP [Update ansible collections]
2026-04-07 00:04:33.732706 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-07 00:04:33.733147 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-07 00:04:33.733215 | orchestrator | Starting galaxy collection install process
2026-04-07 00:04:33.733259 | orchestrator | Process install dependency map
2026-04-07 00:04:33.733297 | orchestrator | Starting collection install process
2026-04-07 00:04:33.733333 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-04-07 00:04:33.733400 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-04-07 00:04:33.733453 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-07 00:04:33.733538 | orchestrator | ok: Item: commons Runtime: 0:00:01.063695
2026-04-07 00:04:35.471218 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-07 00:04:35.471382 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-07 00:04:35.471427 | orchestrator | Starting galaxy collection install process
2026-04-07 00:04:35.471458 | orchestrator | Process install dependency map
2026-04-07 00:04:35.471487 | orchestrator | Starting collection install process
2026-04-07 00:04:35.471514 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-04-07 00:04:35.471541 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-04-07 00:04:35.471568 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-07 00:04:35.471610 | orchestrator | ok: Item: services Runtime: 0:00:01.463530
2026-04-07 00:04:35.497591 |
2026-04-07 00:04:35.497754 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-07 00:04:46.065164 | orchestrator | ok
2026-04-07 00:04:46.076170 |
2026-04-07 00:04:46.076410 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-07 00:05:46.114324 | orchestrator | ok
2026-04-07 00:05:46.124899 |
2026-04-07 00:05:46.125019 | TASK [Fetch manager ssh hostkey]
2026-04-07 00:05:47.699036 | orchestrator | Output suppressed because no_log was given
2026-04-07 00:05:47.712774 |
2026-04-07 00:05:47.712942 | TASK [Get ssh keypair from terraform environment]
2026-04-07 00:05:48.249710 | orchestrator | ok: Runtime: 0:00:00.006345
2026-04-07 00:05:48.266537 |
2026-04-07 00:05:48.266702 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-07 00:05:48.299238 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-07 00:05:48.307460 |
2026-04-07 00:05:48.307638 | TASK [Run manager part 0]
2026-04-07 00:05:49.598396 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-07 00:05:49.657009 | orchestrator |
2026-04-07 00:05:49.657077 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-07 00:05:49.657088 | orchestrator |
2026-04-07 00:05:49.657109 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-07 00:05:51.588401 | orchestrator | ok: [testbed-manager]
2026-04-07 00:05:51.588458 | orchestrator |
2026-04-07 00:05:51.588481 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-07 00:05:51.588490 | orchestrator |
2026-04-07 00:05:51.588499 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-07 00:05:53.522242 | orchestrator | ok: [testbed-manager]
2026-04-07 00:05:53.522314 | orchestrator |
2026-04-07 00:05:53.522328 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-07 00:05:54.249096 | orchestrator | ok: [testbed-manager]
2026-04-07 00:05:54.249231 | orchestrator |
2026-04-07 00:05:54.249245 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-07 00:05:54.295996 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:05:54.296538 | orchestrator |
2026-04-07 00:05:54.296552 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-07 00:05:54.333881 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:05:54.333966 | orchestrator |
2026-04-07 00:05:54.333975 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-07 00:05:54.379406 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:05:54.379478 | orchestrator |
2026-04-07 00:05:54.379488 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-07 00:05:55.132288 | orchestrator | changed: [testbed-manager]
2026-04-07 00:05:55.132363 | orchestrator |
2026-04-07 00:05:55.132374 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-07 00:08:37.425810 | orchestrator | changed: [testbed-manager]
2026-04-07 00:08:37.425922 | orchestrator |
2026-04-07 00:08:37.425940 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-07 00:10:16.945552 | orchestrator | changed: [testbed-manager]
2026-04-07 00:10:16.945805 | orchestrator |
2026-04-07 00:10:16.945832 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-07 00:10:39.708969 | orchestrator | changed: [testbed-manager]
2026-04-07 00:10:39.709063 | orchestrator |
2026-04-07 00:10:39.709081 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-07
00:10:48.996517 | orchestrator | changed: [testbed-manager] 2026-04-07 00:10:48.996558 | orchestrator | 2026-04-07 00:10:48.996592 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-07 00:10:49.032390 | orchestrator | ok: [testbed-manager] 2026-04-07 00:10:49.032435 | orchestrator | 2026-04-07 00:10:49.032444 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-07 00:10:49.814885 | orchestrator | ok: [testbed-manager] 2026-04-07 00:10:49.814928 | orchestrator | 2026-04-07 00:10:49.814934 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-07 00:10:50.544458 | orchestrator | changed: [testbed-manager] 2026-04-07 00:10:50.544498 | orchestrator | 2026-04-07 00:10:50.544684 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-07 00:10:56.487928 | orchestrator | changed: [testbed-manager] 2026-04-07 00:10:56.487968 | orchestrator | 2026-04-07 00:10:56.487976 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-07 00:11:02.357893 | orchestrator | changed: [testbed-manager] 2026-04-07 00:11:02.357946 | orchestrator | 2026-04-07 00:11:02.357953 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-07 00:11:05.007412 | orchestrator | changed: [testbed-manager] 2026-04-07 00:11:05.007483 | orchestrator | 2026-04-07 00:11:05.007499 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-07 00:11:06.771233 | orchestrator | changed: [testbed-manager] 2026-04-07 00:11:06.771340 | orchestrator | 2026-04-07 00:11:06.771364 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-07 00:11:07.802480 | orchestrator | changed: [testbed-manager] => 
(item=osism/ansible-collection-commons) 2026-04-07 00:11:07.802634 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-07 00:11:07.802652 | orchestrator | 2026-04-07 00:11:07.802670 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-07 00:11:07.849398 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-07 00:11:07.849491 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-07 00:11:07.849509 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-07 00:11:07.849524 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-07 00:11:10.870637 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-07 00:11:10.870745 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-07 00:11:10.870769 | orchestrator | 2026-04-07 00:11:10.870791 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-07 00:11:11.404999 | orchestrator | changed: [testbed-manager] 2026-04-07 00:11:11.405089 | orchestrator | 2026-04-07 00:11:11.405105 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-07 00:16:34.852673 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-07 00:16:34.852792 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-07 00:16:34.852819 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-07 00:16:34.852841 | orchestrator | 2026-04-07 00:16:34.852863 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-07 00:16:37.207802 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-07 00:16:37.207838 | 
orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-07 00:16:37.207843 | orchestrator | 2026-04-07 00:16:37.207849 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-07 00:16:37.207857 | orchestrator | 2026-04-07 00:16:37.207863 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 00:16:38.602320 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:38.602371 | orchestrator | 2026-04-07 00:16:38.602380 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-07 00:16:38.649570 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:38.649631 | orchestrator | 2026-04-07 00:16:38.649647 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-07 00:16:38.714640 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:38.714716 | orchestrator | 2026-04-07 00:16:38.714731 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-07 00:16:39.500153 | orchestrator | changed: [testbed-manager] 2026-04-07 00:16:39.500250 | orchestrator | 2026-04-07 00:16:39.500305 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-07 00:16:40.223140 | orchestrator | changed: [testbed-manager] 2026-04-07 00:16:40.223228 | orchestrator | 2026-04-07 00:16:40.223244 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-07 00:16:41.610721 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-07 00:16:41.610781 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-07 00:16:41.610793 | orchestrator | 2026-04-07 00:16:41.610806 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-07 00:16:42.988544 | orchestrator | changed: 
[testbed-manager] 2026-04-07 00:16:42.988657 | orchestrator | 2026-04-07 00:16:42.988675 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-07 00:16:44.687307 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-07 00:16:44.687401 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-07 00:16:44.687431 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-07 00:16:44.687443 | orchestrator | 2026-04-07 00:16:44.687462 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-07 00:16:44.742944 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:16:44.743021 | orchestrator | 2026-04-07 00:16:44.743034 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-07 00:16:44.810063 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:16:44.810145 | orchestrator | 2026-04-07 00:16:44.810162 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-07 00:16:45.348696 | orchestrator | changed: [testbed-manager] 2026-04-07 00:16:45.348780 | orchestrator | 2026-04-07 00:16:45.348793 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-07 00:16:45.418185 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:16:45.418294 | orchestrator | 2026-04-07 00:16:45.418311 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-07 00:16:46.268038 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-07 00:16:46.268131 | orchestrator | changed: [testbed-manager] 2026-04-07 00:16:46.268148 | orchestrator | 2026-04-07 00:16:46.268161 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-07 
00:16:46.306566 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:16:46.306645 | orchestrator | 2026-04-07 00:16:46.306660 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-07 00:16:46.342798 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:16:46.342899 | orchestrator | 2026-04-07 00:16:46.342923 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-07 00:16:46.380638 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:16:46.380732 | orchestrator | 2026-04-07 00:16:46.380753 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-07 00:16:46.454150 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:16:46.454254 | orchestrator | 2026-04-07 00:16:46.454330 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-07 00:16:47.196683 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:47.196741 | orchestrator | 2026-04-07 00:16:47.196747 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-07 00:16:47.196753 | orchestrator | 2026-04-07 00:16:47.196759 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 00:16:48.555295 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:48.555367 | orchestrator | 2026-04-07 00:16:48.555383 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-07 00:16:49.514823 | orchestrator | changed: [testbed-manager] 2026-04-07 00:16:49.514922 | orchestrator | 2026-04-07 00:16:49.514938 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:16:49.514951 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-07 00:16:49.514963 | 
orchestrator | 2026-04-07 00:16:49.727439 | orchestrator | ok: Runtime: 0:11:00.948266 2026-04-07 00:16:49.748973 | 2026-04-07 00:16:49.749168 | TASK [Point out that the log in on the manager is now possible] 2026-04-07 00:16:49.799069 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-07 00:16:49.809552 | 2026-04-07 00:16:49.809692 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-07 00:16:49.845933 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-07 00:16:49.855420 | 2026-04-07 00:16:49.855545 | TASK [Run manager part 1 + 2] 2026-04-07 00:16:50.697792 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-07 00:16:50.755982 | orchestrator | 2026-04-07 00:16:50.756028 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-07 00:16:50.756035 | orchestrator | 2026-04-07 00:16:50.756048 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 00:16:53.621363 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:53.621419 | orchestrator | 2026-04-07 00:16:53.621447 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-07 00:16:53.656749 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:16:53.656800 | orchestrator | 2026-04-07 00:16:53.656809 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-07 00:16:53.708273 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:53.708327 | orchestrator | 2026-04-07 00:16:53.708337 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-07 00:16:53.756656 | orchestrator | ok: 
[testbed-manager] 2026-04-07 00:16:53.756714 | orchestrator | 2026-04-07 00:16:53.756725 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-07 00:16:53.839987 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:53.840058 | orchestrator | 2026-04-07 00:16:53.840075 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-07 00:16:53.905078 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:53.905132 | orchestrator | 2026-04-07 00:16:53.905142 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-07 00:16:53.951986 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-07 00:16:53.952032 | orchestrator | 2026-04-07 00:16:53.952038 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-07 00:16:54.663757 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:54.663813 | orchestrator | 2026-04-07 00:16:54.663824 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-07 00:16:54.717924 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:16:54.717984 | orchestrator | 2026-04-07 00:16:54.717994 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-07 00:16:56.245082 | orchestrator | changed: [testbed-manager] 2026-04-07 00:16:56.245174 | orchestrator | 2026-04-07 00:16:56.245193 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-07 00:16:56.793574 | orchestrator | ok: [testbed-manager] 2026-04-07 00:16:56.793659 | orchestrator | 2026-04-07 00:16:56.793675 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-07 00:16:57.913160 | 
orchestrator | changed: [testbed-manager] 2026-04-07 00:16:57.913233 | orchestrator | 2026-04-07 00:16:57.913285 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-07 00:17:13.316610 | orchestrator | changed: [testbed-manager] 2026-04-07 00:17:13.316715 | orchestrator | 2026-04-07 00:17:13.316731 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-07 00:17:13.983210 | orchestrator | ok: [testbed-manager] 2026-04-07 00:17:13.983329 | orchestrator | 2026-04-07 00:17:13.983356 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-07 00:17:14.039102 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:17:14.039187 | orchestrator | 2026-04-07 00:17:14.039209 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-07 00:17:14.983684 | orchestrator | changed: [testbed-manager] 2026-04-07 00:17:14.983726 | orchestrator | 2026-04-07 00:17:14.983735 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-07 00:17:15.938855 | orchestrator | changed: [testbed-manager] 2026-04-07 00:17:15.938941 | orchestrator | 2026-04-07 00:17:15.938957 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-07 00:17:16.507716 | orchestrator | changed: [testbed-manager] 2026-04-07 00:17:16.507832 | orchestrator | 2026-04-07 00:17:16.507859 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-07 00:17:16.553438 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-07 00:17:16.553526 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-07 00:17:16.553537 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2026-04-07 00:17:16.553545 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-07 00:17:18.488372 | orchestrator | changed: [testbed-manager] 2026-04-07 00:17:18.488418 | orchestrator | 2026-04-07 00:17:18.488426 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-07 00:17:26.634974 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-07 00:17:26.635078 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-07 00:17:26.635095 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-07 00:17:26.635108 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-07 00:17:26.635127 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-07 00:17:26.635138 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-07 00:17:26.635149 | orchestrator | 2026-04-07 00:17:26.635162 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-07 00:17:27.625752 | orchestrator | changed: [testbed-manager] 2026-04-07 00:17:27.625794 | orchestrator | 2026-04-07 00:17:27.625802 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-07 00:17:30.610183 | orchestrator | changed: [testbed-manager] 2026-04-07 00:17:30.610387 | orchestrator | 2026-04-07 00:17:30.610404 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-07 00:17:30.656276 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:17:30.656386 | orchestrator | 2026-04-07 00:17:30.656400 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-07 00:19:08.060883 | orchestrator | changed: [testbed-manager] 2026-04-07 00:19:08.060922 | orchestrator | 2026-04-07 00:19:08.060928 | orchestrator | RUNNING HANDLER 
[osism.commons.repository : Force update of package cache] ***** 2026-04-07 00:19:09.058220 | orchestrator | ok: [testbed-manager] 2026-04-07 00:19:09.058311 | orchestrator | 2026-04-07 00:19:09.058329 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:19:09.058347 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-07 00:19:09.058364 | orchestrator | 2026-04-07 00:19:09.491207 | orchestrator | ok: Runtime: 0:02:19.001674 2026-04-07 00:19:09.508579 | 2026-04-07 00:19:09.508725 | TASK [Reboot manager] 2026-04-07 00:19:11.047970 | orchestrator | ok: Runtime: 0:00:00.885175 2026-04-07 00:19:11.066732 | 2026-04-07 00:19:11.066903 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-07 00:19:23.775538 | orchestrator | ok 2026-04-07 00:19:23.785444 | 2026-04-07 00:19:23.785575 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-07 00:20:23.828885 | orchestrator | ok 2026-04-07 00:20:23.836801 | 2026-04-07 00:20:23.836922 | TASK [Deploy manager + bootstrap nodes] 2026-04-07 00:20:26.095566 | orchestrator | 2026-04-07 00:20:26.095809 | orchestrator | # DEPLOY MANAGER 2026-04-07 00:20:26.095836 | orchestrator | 2026-04-07 00:20:26.095850 | orchestrator | + set -e 2026-04-07 00:20:26.095864 | orchestrator | + echo 2026-04-07 00:20:26.095878 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-07 00:20:26.095896 | orchestrator | + echo 2026-04-07 00:20:26.095947 | orchestrator | + cat /opt/manager-vars.sh 2026-04-07 00:20:26.098769 | orchestrator | export NUMBER_OF_NODES=6 2026-04-07 00:20:26.098847 | orchestrator | 2026-04-07 00:20:26.098859 | orchestrator | export CEPH_VERSION=reef 2026-04-07 00:20:26.098865 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-07 00:20:26.098871 | orchestrator | export MANAGER_VERSION=latest 2026-04-07 00:20:26.098885 | orchestrator | export 
OPENSTACK_VERSION=2024.2 2026-04-07 00:20:26.098889 | orchestrator | 2026-04-07 00:20:26.098896 | orchestrator | export ARA=false 2026-04-07 00:20:26.098900 | orchestrator | export DEPLOY_MODE=manager 2026-04-07 00:20:26.098908 | orchestrator | export TEMPEST=true 2026-04-07 00:20:26.098912 | orchestrator | export IS_ZUUL=true 2026-04-07 00:20:26.098916 | orchestrator | 2026-04-07 00:20:26.098923 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.15 2026-04-07 00:20:26.098927 | orchestrator | export EXTERNAL_API=false 2026-04-07 00:20:26.098931 | orchestrator | 2026-04-07 00:20:26.098935 | orchestrator | export IMAGE_USER=ubuntu 2026-04-07 00:20:26.098942 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-07 00:20:26.098946 | orchestrator | 2026-04-07 00:20:26.098950 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-07 00:20:26.099093 | orchestrator | 2026-04-07 00:20:26.099100 | orchestrator | + echo 2026-04-07 00:20:26.099107 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 00:20:26.099646 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 00:20:26.099658 | orchestrator | ++ INTERACTIVE=false 2026-04-07 00:20:26.099666 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 00:20:26.099673 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 00:20:26.099683 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 00:20:26.099687 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 00:20:26.099691 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 00:20:26.099797 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 00:20:26.099803 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 00:20:26.099807 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 00:20:26.099811 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 00:20:26.099815 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-07 00:20:26.099818 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-07 00:20:26.099845 | orchestrator | 
++ export OPENSTACK_VERSION=2024.2 2026-04-07 00:20:26.099857 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 00:20:26.099861 | orchestrator | ++ export ARA=false 2026-04-07 00:20:26.099865 | orchestrator | ++ ARA=false 2026-04-07 00:20:26.099869 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 00:20:26.099875 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 00:20:26.099881 | orchestrator | ++ export TEMPEST=true 2026-04-07 00:20:26.099953 | orchestrator | ++ TEMPEST=true 2026-04-07 00:20:26.099963 | orchestrator | ++ export IS_ZUUL=true 2026-04-07 00:20:26.099968 | orchestrator | ++ IS_ZUUL=true 2026-04-07 00:20:26.099971 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.15 2026-04-07 00:20:26.099975 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.15 2026-04-07 00:20:26.099979 | orchestrator | ++ export EXTERNAL_API=false 2026-04-07 00:20:26.099983 | orchestrator | ++ EXTERNAL_API=false 2026-04-07 00:20:26.099987 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-07 00:20:26.099992 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-07 00:20:26.099998 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-07 00:20:26.100003 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-07 00:20:26.100010 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-07 00:20:26.100016 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-07 00:20:26.100022 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-07 00:20:26.149406 | orchestrator | + docker version 2026-04-07 00:20:26.241759 | orchestrator | Client: Docker Engine - Community 2026-04-07 00:20:26.241864 | orchestrator | Version: 27.5.1 2026-04-07 00:20:26.241879 | orchestrator | API version: 1.47 2026-04-07 00:20:26.241893 | orchestrator | Go version: go1.22.11 2026-04-07 00:20:26.241905 | orchestrator | Git commit: 9f9e405 2026-04-07 00:20:26.241916 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-07 00:20:26.241928 | 
orchestrator | OS/Arch: linux/amd64 2026-04-07 00:20:26.241939 | orchestrator | Context: default 2026-04-07 00:20:26.241950 | orchestrator | 2026-04-07 00:20:26.241961 | orchestrator | Server: Docker Engine - Community 2026-04-07 00:20:26.241972 | orchestrator | Engine: 2026-04-07 00:20:26.241983 | orchestrator | Version: 27.5.1 2026-04-07 00:20:26.241995 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-07 00:20:26.242087 | orchestrator | Go version: go1.22.11 2026-04-07 00:20:26.242101 | orchestrator | Git commit: 4c9b3b0 2026-04-07 00:20:26.242112 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-07 00:20:26.242123 | orchestrator | OS/Arch: linux/amd64 2026-04-07 00:20:26.242134 | orchestrator | Experimental: false 2026-04-07 00:20:26.242145 | orchestrator | containerd: 2026-04-07 00:20:26.242156 | orchestrator | Version: v2.2.2 2026-04-07 00:20:26.242182 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-07 00:20:26.242197 | orchestrator | runc: 2026-04-07 00:20:26.242216 | orchestrator | Version: 1.3.4 2026-04-07 00:20:26.242234 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-07 00:20:26.242253 | orchestrator | docker-init: 2026-04-07 00:20:26.242272 | orchestrator | Version: 0.19.0 2026-04-07 00:20:26.242293 | orchestrator | GitCommit: de40ad0 2026-04-07 00:20:26.245171 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-07 00:20:26.254388 | orchestrator | + set -e 2026-04-07 00:20:26.254450 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 00:20:26.254473 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 00:20:26.254496 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 00:20:26.254517 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 00:20:26.254538 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 00:20:26.254559 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 00:20:26.254580 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 
00:20:26.254638 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-07 00:20:26.254662 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-07 00:20:26.254683 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-07 00:20:26.254702 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 00:20:26.254722 | orchestrator | ++ export ARA=false 2026-04-07 00:20:26.254742 | orchestrator | ++ ARA=false 2026-04-07 00:20:26.254762 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 00:20:26.254782 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 00:20:26.254802 | orchestrator | ++ export TEMPEST=true 2026-04-07 00:20:26.254822 | orchestrator | ++ TEMPEST=true 2026-04-07 00:20:26.254842 | orchestrator | ++ export IS_ZUUL=true 2026-04-07 00:20:26.254861 | orchestrator | ++ IS_ZUUL=true 2026-04-07 00:20:26.254881 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.15 2026-04-07 00:20:26.254901 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.15 2026-04-07 00:20:26.254931 | orchestrator | ++ export EXTERNAL_API=false 2026-04-07 00:20:26.254951 | orchestrator | ++ EXTERNAL_API=false 2026-04-07 00:20:26.254971 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-07 00:20:26.254991 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-07 00:20:26.255003 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-07 00:20:26.255014 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-07 00:20:26.255025 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-07 00:20:26.255036 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-07 00:20:26.255047 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 00:20:26.255058 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 00:20:26.255069 | orchestrator | ++ INTERACTIVE=false 2026-04-07 00:20:26.255079 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 00:20:26.255094 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 00:20:26.255105 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 
2026-04-07 00:20:26.255115 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-07 00:20:26.255131 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-04-07 00:20:26.261782 | orchestrator | + set -e 2026-04-07 00:20:26.261833 | orchestrator | + VERSION=reef 2026-04-07 00:20:26.262516 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-07 00:20:26.268873 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-07 00:20:26.268919 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-07 00:20:26.274098 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-07 00:20:26.280027 | orchestrator | + set -e 2026-04-07 00:20:26.280070 | orchestrator | + VERSION=2024.2 2026-04-07 00:20:26.280591 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-07 00:20:26.285785 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-07 00:20:26.285862 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-07 00:20:26.290423 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-07 00:20:26.291300 | orchestrator | ++ semver latest 7.0.0 2026-04-07 00:20:26.347301 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-07 00:20:26.347402 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-07 00:20:26.347417 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-07 00:20:26.348376 | orchestrator | ++ semver latest 10.0.0-0 2026-04-07 00:20:26.405462 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-07 00:20:26.405974 | orchestrator | ++ semver 2024.2 2025.1 2026-04-07 00:20:26.460344 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-07 00:20:26.460446 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-07 00:20:26.544583 | orchestrator 
| + [[ -e /opt/venv/bin/activate ]] 2026-04-07 00:20:26.545641 | orchestrator | + source /opt/venv/bin/activate 2026-04-07 00:20:26.546861 | orchestrator | ++ deactivate nondestructive 2026-04-07 00:20:26.546890 | orchestrator | ++ '[' -n '' ']' 2026-04-07 00:20:26.546904 | orchestrator | ++ '[' -n '' ']' 2026-04-07 00:20:26.546923 | orchestrator | ++ hash -r 2026-04-07 00:20:26.546935 | orchestrator | ++ '[' -n '' ']' 2026-04-07 00:20:26.546946 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-07 00:20:26.546957 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-07 00:20:26.546972 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-07 00:20:26.547103 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-07 00:20:26.547118 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-07 00:20:26.547130 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-07 00:20:26.547141 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-07 00:20:26.547156 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-07 00:20:26.547470 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-07 00:20:26.547486 | orchestrator | ++ export PATH 2026-04-07 00:20:26.547497 | orchestrator | ++ '[' -n '' ']' 2026-04-07 00:20:26.547512 | orchestrator | ++ '[' -z '' ']' 2026-04-07 00:20:26.547523 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-07 00:20:26.547534 | orchestrator | ++ PS1='(venv) ' 2026-04-07 00:20:26.547545 | orchestrator | ++ export PS1 2026-04-07 00:20:26.547556 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-07 00:20:26.547568 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-07 00:20:26.547583 | orchestrator | ++ hash -r 2026-04-07 00:20:26.547802 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass 
/opt/configuration/ansible/manager-part-3.yml 2026-04-07 00:20:27.527538 | orchestrator | 2026-04-07 00:20:27.527669 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-07 00:20:27.527681 | orchestrator | 2026-04-07 00:20:27.527688 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-07 00:20:28.015168 | orchestrator | ok: [testbed-manager] 2026-04-07 00:20:28.015282 | orchestrator | 2026-04-07 00:20:28.015299 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-07 00:20:28.860759 | orchestrator | changed: [testbed-manager] 2026-04-07 00:20:28.860899 | orchestrator | 2026-04-07 00:20:28.860919 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-07 00:20:28.860932 | orchestrator | 2026-04-07 00:20:28.860944 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 00:20:31.045242 | orchestrator | ok: [testbed-manager] 2026-04-07 00:20:31.045360 | orchestrator | 2026-04-07 00:20:31.045378 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-07 00:20:31.096488 | orchestrator | ok: [testbed-manager] 2026-04-07 00:20:31.096583 | orchestrator | 2026-04-07 00:20:31.096601 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-07 00:20:31.546658 | orchestrator | changed: [testbed-manager] 2026-04-07 00:20:31.546764 | orchestrator | 2026-04-07 00:20:31.546778 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-07 00:20:31.581313 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:20:31.581408 | orchestrator | 2026-04-07 00:20:31.581422 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-07 
00:20:31.932556 | orchestrator | changed: [testbed-manager] 2026-04-07 00:20:31.932736 | orchestrator | 2026-04-07 00:20:31.932768 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-07 00:20:32.241886 | orchestrator | ok: [testbed-manager] 2026-04-07 00:20:32.242106 | orchestrator | 2026-04-07 00:20:32.242144 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-07 00:20:32.355066 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:20:32.355169 | orchestrator | 2026-04-07 00:20:32.355186 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-04-07 00:20:32.355199 | orchestrator | 2026-04-07 00:20:32.355210 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 00:20:34.082802 | orchestrator | ok: [testbed-manager] 2026-04-07 00:20:34.082926 | orchestrator | 2026-04-07 00:20:34.082944 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-07 00:20:34.182182 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-07 00:20:34.182286 | orchestrator | 2026-04-07 00:20:34.182299 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-07 00:20:34.231379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-07 00:20:34.231492 | orchestrator | 2026-04-07 00:20:34.231514 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-07 00:20:35.206412 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-07 00:20:35.206528 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-07 00:20:35.206549 | orchestrator | changed: [testbed-manager] => 
(item=/opt/traefik/configuration) 2026-04-07 00:20:35.206565 | orchestrator | 2026-04-07 00:20:35.206581 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-07 00:20:36.779723 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-07 00:20:36.779836 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-07 00:20:36.779852 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-07 00:20:36.779866 | orchestrator | 2026-04-07 00:20:36.779889 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-04-07 00:20:37.336927 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-07 00:20:37.337017 | orchestrator | changed: [testbed-manager] 2026-04-07 00:20:37.337029 | orchestrator | 2026-04-07 00:20:37.337038 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-07 00:20:37.895460 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-07 00:20:37.895565 | orchestrator | changed: [testbed-manager] 2026-04-07 00:20:37.895584 | orchestrator | 2026-04-07 00:20:37.895597 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-07 00:20:37.942993 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:20:37.943081 | orchestrator | 2026-04-07 00:20:37.943096 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-07 00:20:38.255697 | orchestrator | ok: [testbed-manager] 2026-04-07 00:20:38.255798 | orchestrator | 2026-04-07 00:20:38.255815 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-07 00:20:38.310331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-07 00:20:38.310428 | orchestrator | 
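The xtrace near the top of this section shows how set-ceph-version.sh and set-openstack-version.sh pin versions: guard on a grep for the existing `ceph_version:` / `openstack_version:` key, then `sed -i` the new value in place. A minimal sketch of that pattern follows; the helper name `set_version` is illustrative, and the append-when-absent branch is an assumption (the trace only shows the rewrite path):

```shell
# Sketch of the version-pinning pattern visible in the xtrace above.
# The real scripts operate on
# /opt/configuration/environments/manager/configuration.yml.
set -e

set_version() {
    local key="$1" version="$2" config="$3"
    # Mirror the `[[ -n $(grep ...) ]]` guard from the trace:
    # only rewrite the line when the key already exists.
    if grep -q "^${key}:" "$config"; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$config"
    else
        # Assumption: append when missing; the traced scripts do not
        # show this branch.
        echo "${key}: ${version}" >> "$config"
    fi
}
```

Using `sed` keyed on `^key:` keeps the edit idempotent, which is why rerunning the job with a different version simply rewrites the same line.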
2026-04-07 00:20:38.310444 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-04-07 00:20:39.302815 | orchestrator | changed: [testbed-manager] 2026-04-07 00:20:39.302944 | orchestrator | 2026-04-07 00:20:39.302963 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-07 00:20:40.031894 | orchestrator | changed: [testbed-manager] 2026-04-07 00:20:40.031989 | orchestrator | 2026-04-07 00:20:40.032007 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-07 00:20:58.586701 | orchestrator | changed: [testbed-manager] 2026-04-07 00:20:58.586797 | orchestrator | 2026-04-07 00:20:58.586830 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-07 00:20:58.635797 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:20:58.635905 | orchestrator | 2026-04-07 00:20:58.635933 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-07 00:20:58.635947 | orchestrator | 2026-04-07 00:20:58.635959 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 00:21:00.467097 | orchestrator | ok: [testbed-manager] 2026-04-07 00:21:00.467207 | orchestrator | 2026-04-07 00:21:00.467250 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-07 00:21:00.577899 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-07 00:21:00.577979 | orchestrator | 2026-04-07 00:21:00.577988 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-07 00:21:00.638999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-07 00:21:00.639102 | orchestrator | 2026-04-07 
00:21:00.639119 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-04-07 00:21:02.964754 | orchestrator | ok: [testbed-manager] 2026-04-07 00:21:02.964889 | orchestrator | 2026-04-07 00:21:02.964918 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-07 00:21:03.023256 | orchestrator | ok: [testbed-manager] 2026-04-07 00:21:03.023353 | orchestrator | 2026-04-07 00:21:03.023367 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-07 00:21:03.150510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-07 00:21:03.150627 | orchestrator | 2026-04-07 00:21:03.150702 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-07 00:21:05.985054 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-07 00:21:05.985172 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-07 00:21:05.985197 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-07 00:21:05.985217 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-07 00:21:05.985237 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-07 00:21:05.985256 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-07 00:21:05.985275 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-07 00:21:05.985289 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-07 00:21:05.985300 | orchestrator | 2026-04-07 00:21:05.985312 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-07 00:21:06.609350 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:06.609452 | orchestrator | 2026-04-07 
00:21:06.609469 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-04-07 00:21:07.246850 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:07.246952 | orchestrator | 2026-04-07 00:21:07.246968 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-07 00:21:07.317433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-07 00:21:07.317536 | orchestrator | 2026-04-07 00:21:07.317552 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-07 00:21:08.537168 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-07 00:21:08.537272 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-07 00:21:08.537287 | orchestrator | 2026-04-07 00:21:08.537299 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-07 00:21:09.195282 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:09.195405 | orchestrator | 2026-04-07 00:21:09.195434 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-07 00:21:09.254933 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:21:09.255033 | orchestrator | 2026-04-07 00:21:09.255049 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-07 00:21:09.334095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-07 00:21:09.334195 | orchestrator | 2026-04-07 00:21:09.334211 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-07 00:21:09.980984 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:09.981079 | orchestrator | 2026-04-07 
00:21:09.981095 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-04-07 00:21:10.047172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-07 00:21:10.047317 | orchestrator | 2026-04-07 00:21:10.047333 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-07 00:21:11.374157 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-07 00:21:11.374266 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-07 00:21:11.374283 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:11.374297 | orchestrator | 2026-04-07 00:21:11.374310 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-07 00:21:12.001560 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:12.001661 | orchestrator | 2026-04-07 00:21:12.001726 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-07 00:21:12.062216 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:21:12.062308 | orchestrator | 2026-04-07 00:21:12.062323 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-07 00:21:12.153939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-07 00:21:12.154087 | orchestrator | 2026-04-07 00:21:12.154105 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-07 00:21:12.654873 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:12.654973 | orchestrator | 2026-04-07 00:21:12.655011 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-07 00:21:13.061543 | orchestrator | changed: 
[testbed-manager] 2026-04-07 00:21:13.061644 | orchestrator | 2026-04-07 00:21:13.061662 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-07 00:21:14.270135 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-07 00:21:14.270275 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-07 00:21:14.270305 | orchestrator | 2026-04-07 00:21:14.270325 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-07 00:21:14.921739 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:14.921812 | orchestrator | 2026-04-07 00:21:14.921819 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-07 00:21:15.302991 | orchestrator | ok: [testbed-manager] 2026-04-07 00:21:15.303120 | orchestrator | 2026-04-07 00:21:15.303146 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-07 00:21:15.656804 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:15.656908 | orchestrator | 2026-04-07 00:21:15.656923 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-07 00:21:15.709110 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:21:15.709205 | orchestrator | 2026-04-07 00:21:15.709220 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-07 00:21:15.789570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-07 00:21:15.789764 | orchestrator | 2026-04-07 00:21:15.789795 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-07 00:21:15.833058 | orchestrator | ok: [testbed-manager] 2026-04-07 00:21:15.833149 | orchestrator | 2026-04-07 00:21:15.833163 | orchestrator | 
TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-07 00:21:17.838157 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-07 00:21:17.838278 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-07 00:21:17.838300 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-07 00:21:17.838318 | orchestrator | 2026-04-07 00:21:17.838337 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-07 00:21:18.518756 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:18.518860 | orchestrator | 2026-04-07 00:21:18.518876 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-07 00:21:19.222170 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:19.222273 | orchestrator | 2026-04-07 00:21:19.222290 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-07 00:21:19.922769 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:19.922857 | orchestrator | 2026-04-07 00:21:19.922869 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-07 00:21:20.001123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-07 00:21:20.001220 | orchestrator | 2026-04-07 00:21:20.001234 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-07 00:21:20.042412 | orchestrator | ok: [testbed-manager] 2026-04-07 00:21:20.042503 | orchestrator | 2026-04-07 00:21:20.042517 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-07 00:21:20.732303 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-07 00:21:20.732406 | orchestrator | 2026-04-07 
00:21:20.732421 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-07 00:21:20.808549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-07 00:21:20.808645 | orchestrator | 2026-04-07 00:21:20.808659 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-07 00:21:21.521223 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:21.521325 | orchestrator | 2026-04-07 00:21:21.521341 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-07 00:21:22.142955 | orchestrator | ok: [testbed-manager] 2026-04-07 00:21:22.143028 | orchestrator | 2026-04-07 00:21:22.143034 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-07 00:21:22.202796 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:21:22.202884 | orchestrator | 2026-04-07 00:21:22.202897 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-07 00:21:22.259591 | orchestrator | ok: [testbed-manager] 2026-04-07 00:21:22.259762 | orchestrator | 2026-04-07 00:21:22.259781 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-07 00:21:23.084052 | orchestrator | changed: [testbed-manager] 2026-04-07 00:21:23.084189 | orchestrator | 2026-04-07 00:21:23.084217 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-07 00:22:33.571743 | orchestrator | changed: [testbed-manager] 2026-04-07 00:22:33.571926 | orchestrator | 2026-04-07 00:22:33.571953 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-07 00:22:34.525234 | orchestrator | ok: [testbed-manager] 2026-04-07 00:22:34.525335 | 
orchestrator | 2026-04-07 00:22:34.525351 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-07 00:22:34.573110 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:22:34.573218 | orchestrator | 2026-04-07 00:22:34.573240 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-07 00:22:37.235625 | orchestrator | changed: [testbed-manager] 2026-04-07 00:22:37.235730 | orchestrator | 2026-04-07 00:22:37.235748 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-07 00:22:37.361265 | orchestrator | ok: [testbed-manager] 2026-04-07 00:22:37.361383 | orchestrator | 2026-04-07 00:22:37.361422 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-07 00:22:37.361437 | orchestrator | 2026-04-07 00:22:37.361449 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-07 00:22:37.412198 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:22:37.412294 | orchestrator | 2026-04-07 00:22:37.412309 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-07 00:23:37.461348 | orchestrator | Pausing for 60 seconds 2026-04-07 00:23:37.461441 | orchestrator | changed: [testbed-manager] 2026-04-07 00:23:37.461455 | orchestrator | 2026-04-07 00:23:37.461466 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-07 00:23:40.471952 | orchestrator | changed: [testbed-manager] 2026-04-07 00:23:40.472061 | orchestrator | 2026-04-07 00:23:40.472079 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-07 00:24:42.444254 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 
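The "Wait for an healthy manager service" handler above polls its check and counts down from 50 retries, emitting "FAILED - RETRYING ... (N retries left)" on each miss. The same bounded-retry pattern can be sketched in shell; the `retry` helper and its counts are illustrative, not the role's actual implementation:

```shell
# Sketch of a bounded retry loop, mirroring the handler's
# "FAILED - RETRYING: ... (N retries left)" behaviour.
retry() {
    local retries="$1" delay="$2"
    shift 2
    while ! "$@"; do
        retries=$((retries - 1))
        if [ "$retries" -le 0 ]; then
            echo "FAILED: $* did not succeed" >&2
            return 1
        fi
        echo "FAILED - RETRYING: $* (${retries} retries left)" >&2
        sleep "$delay"
    done
}
```

Bounding the loop matters here: an unhealthy manager container would otherwise hang the job instead of failing it after roughly `retries * delay` seconds.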
2026-04-07 00:24:42.444369 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-04-07 00:24:42.444386 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-07 00:24:42.444424 | orchestrator | changed: [testbed-manager] 2026-04-07 00:24:42.444439 | orchestrator | 2026-04-07 00:24:42.444450 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-07 00:24:48.048308 | orchestrator | changed: [testbed-manager] 2026-04-07 00:24:48.048437 | orchestrator | 2026-04-07 00:24:48.048457 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-07 00:24:48.123927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-07 00:24:48.124030 | orchestrator | 2026-04-07 00:24:48.124046 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-07 00:24:48.124059 | orchestrator | 2026-04-07 00:24:48.124070 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-07 00:24:48.179992 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:24:48.180088 | orchestrator | 2026-04-07 00:24:48.180104 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-07 00:24:48.253366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-07 00:24:48.253459 | orchestrator | 2026-04-07 00:24:48.253473 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-07 00:24:49.093433 | orchestrator | changed: [testbed-manager] 2026-04-07 00:24:49.093562 | orchestrator | 2026-04-07 
00:24:49.093589 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-07 00:24:52.242377 | orchestrator | ok: [testbed-manager] 2026-04-07 00:24:52.242505 | orchestrator | 2026-04-07 00:24:52.242534 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-07 00:24:52.319432 | orchestrator | ok: [testbed-manager] => { 2026-04-07 00:24:52.319533 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-07 00:24:52.319550 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-07 00:24:52.319565 | orchestrator | "Checking running containers against expected versions...", 2026-04-07 00:24:52.319577 | orchestrator | "", 2026-04-07 00:24:52.319589 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-07 00:24:52.319601 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-07 00:24:52.319614 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.319632 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-07 00:24:52.319649 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.319660 | orchestrator | "", 2026-04-07 00:24:52.319672 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-07 00:24:52.319683 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-04-07 00:24:52.319694 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.319705 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-04-07 00:24:52.319716 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.319727 | orchestrator | "", 2026-04-07 00:24:52.319738 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-07 00:24:52.319749 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-07 00:24:52.319760 | orchestrator 
| " Enabled: true", 2026-04-07 00:24:52.319779 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-07 00:24:52.319791 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.319802 | orchestrator | "", 2026-04-07 00:24:52.319813 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-07 00:24:52.319865 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-07 00:24:52.319879 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.319891 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-07 00:24:52.319902 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.319913 | orchestrator | "", 2026-04-07 00:24:52.319924 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-07 00:24:52.319962 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-07 00:24:52.319976 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.319988 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-07 00:24:52.320001 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320014 | orchestrator | "", 2026-04-07 00:24:52.320026 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-07 00:24:52.320039 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320052 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320065 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320078 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320091 | orchestrator | "", 2026-04-07 00:24:52.320103 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-07 00:24:52.320116 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-07 00:24:52.320129 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320142 | orchestrator | " Running: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-04-07 00:24:52.320155 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320167 | orchestrator | "", 2026-04-07 00:24:52.320180 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-07 00:24:52.320193 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-07 00:24:52.320205 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320218 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-07 00:24:52.320240 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320254 | orchestrator | "", 2026-04-07 00:24:52.320266 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-07 00:24:52.320283 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-04-07 00:24:52.320296 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320309 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-04-07 00:24:52.320321 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320332 | orchestrator | "", 2026-04-07 00:24:52.320343 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-07 00:24:52.320353 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-07 00:24:52.320364 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320375 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-07 00:24:52.320386 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320396 | orchestrator | "", 2026-04-07 00:24:52.320407 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-07 00:24:52.320418 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320429 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320440 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320451 | orchestrator | " 
Status: ✅ MATCH", 2026-04-07 00:24:52.320461 | orchestrator | "", 2026-04-07 00:24:52.320472 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-07 00:24:52.320483 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320494 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320505 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320516 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320527 | orchestrator | "", 2026-04-07 00:24:52.320537 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-07 00:24:52.320548 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320559 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320570 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320580 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320591 | orchestrator | "", 2026-04-07 00:24:52.320602 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-07 00:24:52.320613 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320624 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320642 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320654 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320665 | orchestrator | "", 2026-04-07 00:24:52.320675 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-07 00:24:52.320706 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320718 | orchestrator | " Enabled: true", 2026-04-07 00:24:52.320729 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-07 00:24:52.320740 | orchestrator | " Status: ✅ MATCH", 2026-04-07 00:24:52.320751 | orchestrator | "", 2026-04-07 00:24:52.320762 | orchestrator | "=== Summary ===", 2026-04-07 
00:24:52.320772 | orchestrator | "Errors (version mismatches): 0", 2026-04-07 00:24:52.320783 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-07 00:24:52.320794 | orchestrator | "", 2026-04-07 00:24:52.320805 | orchestrator | "✅ All running containers match expected versions!" 2026-04-07 00:24:52.320816 | orchestrator | ] 2026-04-07 00:24:52.320903 | orchestrator | } 2026-04-07 00:24:52.320918 | orchestrator | 2026-04-07 00:24:52.320929 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-07 00:24:52.385231 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:24:52.385326 | orchestrator | 2026-04-07 00:24:52.385350 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:24:52.385372 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-07 00:24:52.385395 | orchestrator | 2026-04-07 00:24:52.478710 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-07 00:24:52.478801 | orchestrator | + deactivate 2026-04-07 00:24:52.478816 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-07 00:24:52.478881 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-07 00:24:52.478893 | orchestrator | + export PATH 2026-04-07 00:24:52.478904 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-07 00:24:52.478914 | orchestrator | + '[' -n '' ']' 2026-04-07 00:24:52.478924 | orchestrator | + hash -r 2026-04-07 00:24:52.478934 | orchestrator | + '[' -n '' ']' 2026-04-07 00:24:52.478944 | orchestrator | + unset VIRTUAL_ENV 2026-04-07 00:24:52.478954 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-07 00:24:52.479177 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-07 00:24:52.479196 | orchestrator | + unset -f deactivate 2026-04-07 00:24:52.479207 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-07 00:24:52.485204 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-07 00:24:52.485290 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-07 00:24:52.485310 | orchestrator | + local max_attempts=60 2026-04-07 00:24:52.485328 | orchestrator | + local name=ceph-ansible 2026-04-07 00:24:52.485344 | orchestrator | + local attempt_num=1 2026-04-07 00:24:52.485806 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:24:52.515633 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:24:52.515713 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-07 00:24:52.515724 | orchestrator | + local max_attempts=60 2026-04-07 00:24:52.515734 | orchestrator | + local name=kolla-ansible 2026-04-07 00:24:52.515743 | orchestrator | + local attempt_num=1 2026-04-07 00:24:52.516327 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-07 00:24:52.545891 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:24:52.546190 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-07 00:24:52.546222 | orchestrator | + local max_attempts=60 2026-04-07 00:24:52.546235 | orchestrator | + local name=osism-ansible 2026-04-07 00:24:52.546246 | orchestrator | + local attempt_num=1 2026-04-07 00:24:52.546272 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-07 00:24:52.578109 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:24:52.578190 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-07 00:24:52.578203 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-07 00:24:53.247757 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-07 00:24:53.424944 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-07 00:24:53.425074 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-07 00:24:53.425090 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-07 00:24:53.425102 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-07 00:24:53.425115 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-04-07 00:24:53.425127 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-07 00:24:53.425138 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-07 00:24:53.425149 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-07 00:24:53.425178 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-07 00:24:53.425190 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-07 00:24:53.425201 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-04-07 00:24:53.425212 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-07 00:24:53.425223 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-07 00:24:53.425234 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-07 00:24:53.425245 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-07 00:24:53.425256 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-07 00:24:53.431651 | orchestrator | ++ semver latest 7.0.0 2026-04-07 00:24:53.488278 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-07 00:24:53.488380 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-07 00:24:53.488402 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-07 00:24:53.493118 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-07 00:25:05.909199 | orchestrator | 2026-04-07 00:25:05 | INFO  | Prepare task for execution of resolvconf. 2026-04-07 00:25:06.097156 | orchestrator | 2026-04-07 00:25:06 | INFO  | Task 36b84898-1694-431d-9224-b13f91e93ca0 (resolvconf) was prepared for execution. 2026-04-07 00:25:06.097315 | orchestrator | 2026-04-07 00:25:06 | INFO  | It takes a moment until task 36b84898-1694-431d-9224-b13f91e93ca0 (resolvconf) has been started and output is visible here. 
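The `wait_for_container_healthy` calls traced above poll `docker inspect` for the container's health status until it reports `healthy`. A minimal sketch of such a helper, reconstructed from the trace (the retry delay and error handling are assumptions; the actual script lives in the testbed configuration):

```shell
#!/usr/bin/env bash
# Poll a container's Docker health status until it becomes "healthy"
# or the attempt budget is exhausted. Reconstructed from the trace above.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5  # delay between polls; the real interval is an assumption
    done
}
```

Containers without a `HEALTHCHECK` report an empty status here, so the helper only works for images that define one — as all manager services in this deployment do.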
2026-04-07 00:25:19.298316 | orchestrator | 2026-04-07 00:25:19.298467 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-07 00:25:19.298496 | orchestrator | 2026-04-07 00:25:19.298515 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 00:25:19.298532 | orchestrator | Tuesday 07 April 2026 00:25:09 +0000 (0:00:00.191) 0:00:00.191 ********* 2026-04-07 00:25:19.299431 | orchestrator | ok: [testbed-manager] 2026-04-07 00:25:19.299487 | orchestrator | 2026-04-07 00:25:19.299509 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-07 00:25:19.299530 | orchestrator | Tuesday 07 April 2026 00:25:13 +0000 (0:00:03.829) 0:00:04.021 ********* 2026-04-07 00:25:19.299548 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:25:19.299567 | orchestrator | 2026-04-07 00:25:19.299585 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-07 00:25:19.299603 | orchestrator | Tuesday 07 April 2026 00:25:13 +0000 (0:00:00.056) 0:00:04.077 ********* 2026-04-07 00:25:19.299621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-07 00:25:19.299640 | orchestrator | 2026-04-07 00:25:19.299658 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-07 00:25:19.299675 | orchestrator | Tuesday 07 April 2026 00:25:13 +0000 (0:00:00.080) 0:00:04.157 ********* 2026-04-07 00:25:19.299693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-07 00:25:19.299710 | orchestrator | 2026-04-07 00:25:19.299747 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-07 00:25:19.299765 | orchestrator | Tuesday 07 April 2026 00:25:13 +0000 (0:00:00.076) 0:00:04.234 ********* 2026-04-07 00:25:19.299783 | orchestrator | ok: [testbed-manager] 2026-04-07 00:25:19.299800 | orchestrator | 2026-04-07 00:25:19.299818 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-07 00:25:19.299873 | orchestrator | Tuesday 07 April 2026 00:25:14 +0000 (0:00:01.178) 0:00:05.412 ********* 2026-04-07 00:25:19.299891 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:25:19.299907 | orchestrator | 2026-04-07 00:25:19.299922 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-07 00:25:19.299939 | orchestrator | Tuesday 07 April 2026 00:25:14 +0000 (0:00:00.067) 0:00:05.480 ********* 2026-04-07 00:25:19.299955 | orchestrator | ok: [testbed-manager] 2026-04-07 00:25:19.299971 | orchestrator | 2026-04-07 00:25:19.299987 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-07 00:25:19.300002 | orchestrator | Tuesday 07 April 2026 00:25:15 +0000 (0:00:00.532) 0:00:06.012 ********* 2026-04-07 00:25:19.300018 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:25:19.300034 | orchestrator | 2026-04-07 00:25:19.300050 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-07 00:25:19.300100 | orchestrator | Tuesday 07 April 2026 00:25:15 +0000 (0:00:00.077) 0:00:06.090 ********* 2026-04-07 00:25:19.300117 | orchestrator | changed: [testbed-manager] 2026-04-07 00:25:19.300134 | orchestrator | 2026-04-07 00:25:19.300148 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-07 00:25:19.300164 | orchestrator | Tuesday 07 April 2026 00:25:15 +0000 (0:00:00.598) 0:00:06.689 ********* 2026-04-07 00:25:19.300179 | orchestrator | changed: 
[testbed-manager] 2026-04-07 00:25:19.300195 | orchestrator | 2026-04-07 00:25:19.300211 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-07 00:25:19.300228 | orchestrator | Tuesday 07 April 2026 00:25:16 +0000 (0:00:01.089) 0:00:07.778 ********* 2026-04-07 00:25:19.300243 | orchestrator | ok: [testbed-manager] 2026-04-07 00:25:19.300292 | orchestrator | 2026-04-07 00:25:19.300307 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-07 00:25:19.300323 | orchestrator | Tuesday 07 April 2026 00:25:17 +0000 (0:00:01.002) 0:00:08.780 ********* 2026-04-07 00:25:19.300339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-07 00:25:19.300357 | orchestrator | 2026-04-07 00:25:19.300373 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-07 00:25:19.300388 | orchestrator | Tuesday 07 April 2026 00:25:17 +0000 (0:00:00.084) 0:00:08.865 ********* 2026-04-07 00:25:19.300404 | orchestrator | changed: [testbed-manager] 2026-04-07 00:25:19.300419 | orchestrator | 2026-04-07 00:25:19.300435 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:25:19.300451 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 00:25:19.300464 | orchestrator | 2026-04-07 00:25:19.300477 | orchestrator | 2026-04-07 00:25:19.300490 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:25:19.300503 | orchestrator | Tuesday 07 April 2026 00:25:19 +0000 (0:00:01.174) 0:00:10.039 ********* 2026-04-07 00:25:19.300515 | orchestrator | =============================================================================== 2026-04-07 00:25:19.300599 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.83s 2026-04-07 00:25:19.300616 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.18s 2026-04-07 00:25:19.300629 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2026-04-07 00:25:19.300642 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s 2026-04-07 00:25:19.300653 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2026-04-07 00:25:19.300666 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.60s 2026-04-07 00:25:19.300709 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2026-04-07 00:25:19.300723 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-04-07 00:25:19.300736 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-07 00:25:19.300748 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-07 00:25:19.300760 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-04-07 00:25:19.300773 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-04-07 00:25:19.300786 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-07 00:25:19.468989 | orchestrator | + osism apply sshconfig 2026-04-07 00:25:30.860290 | orchestrator | 2026-04-07 00:25:30 | INFO  | Prepare task for execution of sshconfig. 2026-04-07 00:25:30.937258 | orchestrator | 2026-04-07 00:25:30 | INFO  | Task 1b39cba9-aa37-439a-b6c6-860f20658311 (sshconfig) was prepared for execution. 
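The per-service version check earlier in the log compares each service's expected image reference against the one the running container actually uses. A sketch of that comparison, assuming `docker inspect` is available (function name and message wording are illustrative, not the deployed script):

```shell
# Compare a running container's image against the expected reference.
# Sketch of the check logged above; names and output format are assumptions.
check_service_image() {
    local container="$1" expected="$2"
    local running
    if ! running="$(docker inspect -f '{{.Config.Image}}' "$container" 2>/dev/null)"; then
        echo "WARNING: $container not running (expected $expected)"
        return 2
    fi
    if [ "$running" = "$expected" ]; then
        echo "MATCH: $container runs $running"
    else
        echo "MISMATCH: $container runs $running, expected $expected"
        return 1
    fi
}
```

The summary in the log ("Errors: 0, Warnings: 0") corresponds to counting the non-zero return codes of such a check across all enabled services.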
2026-04-07 00:25:30.937356 | orchestrator | 2026-04-07 00:25:30 | INFO  | It takes a moment until task 1b39cba9-aa37-439a-b6c6-860f20658311 (sshconfig) has been started and output is visible here. 2026-04-07 00:25:41.817047 | orchestrator | 2026-04-07 00:25:41.817143 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-07 00:25:41.817154 | orchestrator | 2026-04-07 00:25:41.817160 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-07 00:25:41.817181 | orchestrator | Tuesday 07 April 2026 00:25:33 +0000 (0:00:00.144) 0:00:00.144 ********* 2026-04-07 00:25:41.817193 | orchestrator | ok: [testbed-manager] 2026-04-07 00:25:41.817204 | orchestrator | 2026-04-07 00:25:41.817213 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-07 00:25:41.817251 | orchestrator | Tuesday 07 April 2026 00:25:34 +0000 (0:00:00.857) 0:00:01.002 ********* 2026-04-07 00:25:41.817261 | orchestrator | changed: [testbed-manager] 2026-04-07 00:25:41.817271 | orchestrator | 2026-04-07 00:25:41.817281 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-07 00:25:41.817291 | orchestrator | Tuesday 07 April 2026 00:25:35 +0000 (0:00:00.492) 0:00:01.494 ********* 2026-04-07 00:25:41.817300 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-07 00:25:41.817310 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-07 00:25:41.817321 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-07 00:25:41.817330 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-07 00:25:41.817341 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-07 00:25:41.817351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-07 00:25:41.817360 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-07 00:25:41.817370 | orchestrator | 2026-04-07 00:25:41.817379 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-07 00:25:41.817389 | orchestrator | Tuesday 07 April 2026 00:25:40 +0000 (0:00:05.631) 0:00:07.126 ********* 2026-04-07 00:25:41.817398 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:25:41.817407 | orchestrator | 2026-04-07 00:25:41.817417 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-07 00:25:41.817426 | orchestrator | Tuesday 07 April 2026 00:25:41 +0000 (0:00:00.114) 0:00:07.241 ********* 2026-04-07 00:25:41.817436 | orchestrator | changed: [testbed-manager] 2026-04-07 00:25:41.817446 | orchestrator | 2026-04-07 00:25:41.817456 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:25:41.817467 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:25:41.817477 | orchestrator | 2026-04-07 00:25:41.817487 | orchestrator | 2026-04-07 00:25:41.817497 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:25:41.817507 | orchestrator | Tuesday 07 April 2026 00:25:41 +0000 (0:00:00.544) 0:00:07.785 ********* 2026-04-07 00:25:41.817518 | orchestrator | =============================================================================== 2026-04-07 00:25:41.817527 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.63s 2026-04-07 00:25:41.817537 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.86s 2026-04-07 00:25:41.817546 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2026-04-07 00:25:41.817556 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.49s 2026-04-07 00:25:41.817565 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s 2026-04-07 00:25:41.996458 | orchestrator | + osism apply known-hosts 2026-04-07 00:25:53.274247 | orchestrator | 2026-04-07 00:25:53 | INFO  | Prepare task for execution of known-hosts. 2026-04-07 00:25:53.351379 | orchestrator | 2026-04-07 00:25:53 | INFO  | Task 7c00b967-90ed-41c1-9bc4-0e8dab302a83 (known-hosts) was prepared for execution. 2026-04-07 00:25:53.351509 | orchestrator | 2026-04-07 00:25:53 | INFO  | It takes a moment until task 7c00b967-90ed-41c1-9bc4-0e8dab302a83 (known-hosts) has been started and output is visible here. 2026-04-07 00:26:09.546999 | orchestrator | 2026-04-07 00:26:09.547177 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-07 00:26:09.547197 | orchestrator | 2026-04-07 00:26:09.547209 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-07 00:26:09.547222 | orchestrator | Tuesday 07 April 2026 00:25:57 +0000 (0:00:00.244) 0:00:00.244 ********* 2026-04-07 00:26:09.547233 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-07 00:26:09.547245 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-07 00:26:09.547305 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-07 00:26:09.547318 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-07 00:26:09.547329 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-07 00:26:09.547340 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-07 00:26:09.547350 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-07 00:26:09.547361 | orchestrator | 2026-04-07 00:26:09.547372 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-07 
00:26:09.547385 | orchestrator | Tuesday 07 April 2026 00:26:03 +0000 (0:00:06.546) 0:00:06.791 ********* 2026-04-07 00:26:09.547407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-07 00:26:09.547421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-07 00:26:09.547432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-07 00:26:09.547443 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-07 00:26:09.547454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-07 00:26:09.547467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-07 00:26:09.547481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-07 00:26:09.547493 | orchestrator | 2026-04-07 00:26:09.547505 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:09.547518 | orchestrator | Tuesday 07 April 2026 00:26:03 +0000 (0:00:00.144) 0:00:06.935 ********* 2026-04-07 00:26:09.547531 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE1iCKhNU4sZInpVOxMLUA2w3L1HtVrB9ZET5vWiozWc) 2026-04-07 00:26:09.547549 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYNF73lSyaPOW8i2yUOfZieYCU8JQJeap4n6NAw24vwqXkAMcacK1WzwW6052WyQgYRjRWp98TdaFntc6i4smR22XpmbyHWFnjXez/GXviWtj+rkWynDEaUsBywcGgnn+4eMOcYfssq3zCjUwwjKIv/pRKXSYSIObbXyDG2ar7D2JuYtGBmEhE5BxstxiBi+a5a3G9YBgMZeKin3o/MyZqa2UxzlwSiGYLnvuZKactWhg71EYlYEnYVXALsu2ZRdqcSbxRz/sz1QOJ4nbB0niqYR/cCqDnZ+qS8TSSv0xaJnWG+hzeoRsqcewAj2pTB/Z1lCZv8n2pa8xIDVht6lmkeKFbQ7qFF4/95lGlsl8siNM89/uZf207EFQq3XZA5RlBPFJ3rsmZsUePjlpr6mrt4FOo85o2taMCXgBjAMDKrPwKVKqji3YsX/kUVM8SCeo41YvU3JtoxkbD+lCYre7ml2Q4bPSgvsMCKJ82OajaF3hevmiR8FggxKK+FOz5osk=) 2026-04-07 00:26:09.547567 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCZvUO9ewVt6Tzm/c5IuDSd2TlutSfvpIWLIz23QdpHMfgdWLciRX/vdQpDp0N2Q6jlqViQH6EB6w0dF0pzXlYA=) 2026-04-07 00:26:09.547582 | orchestrator | 2026-04-07 00:26:09.547595 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:09.547608 | orchestrator | Tuesday 07 April 2026 00:26:04 +0000 (0:00:01.200) 0:00:08.136 ********* 2026-04-07 00:26:09.547644 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEL5YipM9GdwTzoQcNb5WIDelpIr9RkTDf4bbHpHXUagkZ8/n7i/yST1pc9j5JUtL9Ne7SIOiMDHisjI9s5pEVJuocMl5v1u7zdlmM4L3+V2MpwqTiRh/PcKdTafCEak198+hfqVXXCmxh8i651hLTmrn5kLkE+VBM4X2iYuaiCUnyjgnsj+/GVn0SGUU1F5F1GWzuR/qkF9B0FqWoTvP1ITK4RVyGu0wjUv6aYiTFkc5FVEidu6I9oi2bpYh4s5+ROZUa3T3IPR8XawDBN/UNqGuaC1Vc+IpacM3iMBIPu1Wuia95jBSBSADwFqHf/gHG+x9pdcXZXOonDi72toOv24T4aYwA00Cde/rjqUTfmVxjN4bNlfS09CRfzmb54Gbx9FXzDk0DPrIeQw25Hys5a4rljYjxGaoiERp8nDEWvhY2o6fik19EqADNLljQ5NtJLALGqDDmZt9xiceOyhvke2mpwBj1QnPWHpMJTN60EZW8fusyqRH78zXu35gxMHs=) 
2026-04-07 00:26:09.547667 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKSTLhs8ZJj3sqfUVn12f6P+7zJj49Wa7HDa5TKR7llbgDgLUhMmpAHYo40cwUlFjV8/KVnKEHEtes/IwrZlr6g=) 2026-04-07 00:26:09.547680 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE1SQW2oKEnonHBKXG41zGT06yqMVP/tGsOSly7zwC5H) 2026-04-07 00:26:09.547693 | orchestrator | 2026-04-07 00:26:09.547705 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:09.547718 | orchestrator | Tuesday 07 April 2026 00:26:05 +0000 (0:00:00.928) 0:00:09.064 ********* 2026-04-07 00:26:09.547732 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx1NQ9OuLeNY1rJpedPYFD4465cc1bQGaa4WQC1KBp1qVMtm2EZIl9Nc9U6btQLLGT83RmFpLPgjNrQopAmrpEwlvqtFb0jouGf0WMX4YzHPxomELn8/tWdUldNhpUXwJ8kyhDEO7tIRcFKtoCDHnJ23iFoU7Py64a6o7TFND0Q4hB0MTywC96NJ+lMSeTwd1ycc18bbQNcxIExsU9SSHR3qCQ1pbB/na4k7XsNW6PxH4NY9AMYq0A75vfkqt4ly6/NJesFX7iJul/D8KwzwSB6abQpfaRi2dvY7Gmlgu0ndeKRqNmENSD5dyeL0cGmiYxaZWR7mFDt6WCG1rGiZOUbr5w7P48+FHUPtkgAX0cLuYGNLKiT5jiy8N3mGmNftpofrftZl9CQCvjpUJA2c19pLkl+BQK5coZdhPbl//U6D4mf27/HBhqLY5HEJ70evJAWDXqv567dF/jTv+bgJwmqdrIgPINbfCRbGca3e+eh8qVGSmbZQZhect2GB0eQHM=) 2026-04-07 00:26:09.547745 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPb0GA6zFun+irlCBcFzFreZHiM/ftfnlxcpjG6dLbEUWR8szrVi4fY7O1HWJ1YhU+/i5OUmEREfKSGwG/1M7tM=) 2026-04-07 00:26:09.547824 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfzd5Bac2/RxdfmoOSgT+qCM5lUbYf44LJ8qoJqUz3+) 2026-04-07 00:26:09.547838 | orchestrator | 2026-04-07 00:26:09.547873 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:09.547884 
| orchestrator | Tuesday 07 April 2026 00:26:06 +0000 (0:00:01.060) 0:00:10.125 ********* 2026-04-07 00:26:09.547895 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCii/2Fyk6b4npk20Y1WCyAL54gmggzIpFeiOjBARB/TLlUKT3QllSH14FYihOqGiATdSVikd3fV1jNR9uEj1rDHwEqaXrtFc/5fnslsjs1kXDhGI4APu45145Qlxv5RSPKDi9kXlEM2BP1HzlUbtobnpNZuZBqusaiui2Rz0GX4f6rI1ZtpQ1M0y+3Z1S/Y0Xk8SnNX92zHMK6scgdoPb9IMxtWGBeHn/l0LbnMKK+ApWkIYS3G+9fwXphU0Q4Z1lQj6PW5y9cqPHdFswqXiZNw/LbPKS0NlZaCB8Nfw+sDs9KIjNg/GtOx+n49eE7IqP1rTTZ827UKRWpUtXCZ9vofGk7I7iyEpOkaM0/oorXgY4N1ItLrlsuoQEy252U9BC+yJyETksmv5HzDP0hLT2aYZVi+t2pn2HCzpD3mxCGsQEk4xnAO+KzBaJ0L8NsCJCCo8b/BPKOPjYEyeXx0cSX6R70RjXQsJjpbZQh1OtVASsGySkyV0V771wiuI2XsE8=) 2026-04-07 00:26:09.547906 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXN2+Y4qLrgxJInpObYFh8UfqkP2o3ZVCwANTPByqHcZsQj/E3sSr/nKbr6lAxReGn2VehiGWJ/mlYcL2xvBbE=) 2026-04-07 00:26:09.547917 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMt9+hCbe2Lt2gookg8dOEjxzrSfIX9kZocmcpvJ+8Yq) 2026-04-07 00:26:09.547928 | orchestrator | 2026-04-07 00:26:09.547939 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:09.547950 | orchestrator | Tuesday 07 April 2026 00:26:08 +0000 (0:00:01.073) 0:00:11.199 ********* 2026-04-07 00:26:09.547961 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXXfFu6oiZn9jPFq9VcZzuznRY2ZZVJRl737ZVHVcGE2v1BfYabnVK1nCNglYBH+N3XvFVf/y23geM/6D5mGUR092ARLtfbYAHSpdCH/cvKo4Y0pcoOA0KDJvAI1xeWGKK9feLnP6O1UePG+Wf6vl/aTmyuvs0OSC25j3Enn1WF/0uNT0IZTkjDaFachylXowcx93pimWg809leYzlpYQQ32puDV1Fc3pJEU1B+Y/tDy1xJkovnKTcOJSKSzFbXVNcKweL4MCvcsiDh1TuBtszFTRpHrOlYYPnPEJ75J4YkTa22rxuV82D8ctwpYWn0wiODkDHkeBOYpGxdajBtXWqudcyq668N5otbMWFyP7d58YcAX5yyAkbKGNlzEhO2G5HEmInBpNvNyom50kDs2tH5bpzo4yLUkyyZoj4SWkhmgA/L2N5YjV1lk3N/ZLFfbItJ857nhQWrxmDwJr8XMzkiXYIy50jys8ggH2OOi7tegeHGd0y012ioV/ZSft9gks=) 2026-04-07 00:26:09.547984 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXq3YXoY4ZlEFYhcrCuuMmN0/N8io75nSYcWWIA2GXG+6Lcbl5E3CrkHfx9sDM76YE5BrnlKVvWutYFxy8X2ZE=) 2026-04-07 00:26:09.547995 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKJyYm+kNBiWb+50rIOdinkaL3lnK64dagwXE+SBe4z5) 2026-04-07 00:26:09.548006 | orchestrator | 2026-04-07 00:26:09.548017 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:09.548028 | orchestrator | Tuesday 07 April 2026 00:26:09 +0000 (0:00:01.107) 0:00:12.307 ********* 2026-04-07 00:26:09.548048 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAfvvovy58mkkbn9349Zlwxd4IST7wIVZUyXKXNEPvndPiKSnqtC67U3einlQSnrlYQcUS7984kxgYzwASjnRyC6oWX8/OQ9eESNnTIOMThbsf2B6MB+sxmOnpkQ6vaUS9uMAbXxXKyBQP5ywYqcb7VSTGV+4i97BCAA1UZRTbVJ2qIOAHpTxA85VGJChvYon35swWpcJQ4yujrDiaqSLl84ro2BzQ7VKVmzOopxmr9ofOUSZueBSVYO716NnPl6t/CY1p4Mk0Mv4WEi3G5XHLLIh/nuzG5kbINMJD6yhYXdxD/goo9+SZSDKDlM7fwRlN5edUhFvM8V4qhIYgDs8lQgSjGFxhqSNhReRMDw7SisbNYXKpT4sD02M6F8J9KV4b3epAKIGyKHrx4fdBwC/B7zMVjS8ZxJlgFNPg8UQSu5Gyzjx96x8WXahVu7iN2aiO1MzidCK3L+gK8SjItDrz3fuIxm0912veHz0GsDIYZ8H+AQ71M2nvf8aAuJUEQas=) 2026-04-07 00:26:21.137691 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJSDqrra+L8WIK0PdUsIGCp9opJNGWZJyuFpjMhO/nxsCFJAwNyDVGP9jD9SntStUvCnYGYyPRiKQF+cFa6CyHc=) 2026-04-07 00:26:21.137813 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ4kIoN6ufzQPaZXjY9OfpvCw2tii08/RAQiT5TxYaNH) 2026-04-07 00:26:21.137836 | orchestrator | 2026-04-07 00:26:21.137912 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:21.137933 | orchestrator | Tuesday 07 April 2026 00:26:10 +0000 (0:00:01.062) 0:00:13.369 ********* 2026-04-07 00:26:21.137953 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDApLv3nUkp8Y2JabJ5/MuV++vMuEdYsC7jlmmz7qFQHhGtVrE7o5v7G2i5ejaq8yroTHlPI0KZW1wHlXDktDBgoPGP3iaMWUd69c8iyTijlFCqvhsebx8RUYDg/YUVeGC1h4cfWQIU3kF7mWTpq1xsL0/9EaqOHe27cYC9/NpCTFkGUSqZ7+tftc5posNXhFa7P8NiiDPJcxkKRtxBZd57e38fo3KNnIsuWduf83NJPPgckTpjyXf9/YRjw6gIQ9jvLom/KpOXXHPNK3LwJEdwaYpgHCKh862eaxcl1exKR23/bKinwDPkSRtp8eDOBU4TErPnZmiPFPtYe1349Pvksg5Cv8kv6AqQKktKEpMrxWAwelXfYcKLfnHlcTEm2lwRUBxaMGjHWZM9+GEH9pKYIkxCk6XlpWhy8Qj7DU1WlLBOSj5b1KC3NnZq4byeHlI0tl2vo78Jg1PK++zDNtCy4222biFHR/enC2+nHyxnOMCWEwutLmn64jqg0c25X4U=) 2026-04-07 00:26:21.137972 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKydSTiKS3W6aRPUlEFAWhePW3KWv3rUh3WpjoyxV/iKZDSmoU01B0bQP5cjrmfXT9oC+Ll6KWgUbxyMADurO2w=) 2026-04-07 00:26:21.137989 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAyz2yVL3jONHBGLKKgCILgXgC8QVeesZ91GMZy7vajx) 2026-04-07 00:26:21.138006 | orchestrator | 2026-04-07 00:26:21.138089 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-07 00:26:21.138108 | orchestrator | Tuesday 07 April 2026 00:26:11 +0000 (0:00:01.103) 
0:00:14.473 ********* 2026-04-07 00:26:21.138127 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-07 00:26:21.138261 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-07 00:26:21.138280 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-07 00:26:21.138298 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-07 00:26:21.138316 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-07 00:26:21.138353 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-07 00:26:21.138402 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-07 00:26:21.138420 | orchestrator | 2026-04-07 00:26:21.138438 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-07 00:26:21.138457 | orchestrator | Tuesday 07 April 2026 00:26:16 +0000 (0:00:05.332) 0:00:19.805 ********* 2026-04-07 00:26:21.138475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-07 00:26:21.138494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-07 00:26:21.138510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-07 00:26:21.138527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-07 00:26:21.138545 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-07 00:26:21.138561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-07 00:26:21.138578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-07 00:26:21.138594 | orchestrator | 2026-04-07 00:26:21.138610 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:21.138627 | orchestrator | Tuesday 07 April 2026 00:26:16 +0000 (0:00:00.185) 0:00:19.991 ********* 2026-04-07 00:26:21.138644 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE1iCKhNU4sZInpVOxMLUA2w3L1HtVrB9ZET5vWiozWc) 2026-04-07 00:26:21.138693 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYNF73lSyaPOW8i2yUOfZieYCU8JQJeap4n6NAw24vwqXkAMcacK1WzwW6052WyQgYRjRWp98TdaFntc6i4smR22XpmbyHWFnjXez/GXviWtj+rkWynDEaUsBywcGgnn+4eMOcYfssq3zCjUwwjKIv/pRKXSYSIObbXyDG2ar7D2JuYtGBmEhE5BxstxiBi+a5a3G9YBgMZeKin3o/MyZqa2UxzlwSiGYLnvuZKactWhg71EYlYEnYVXALsu2ZRdqcSbxRz/sz1QOJ4nbB0niqYR/cCqDnZ+qS8TSSv0xaJnWG+hzeoRsqcewAj2pTB/Z1lCZv8n2pa8xIDVht6lmkeKFbQ7qFF4/95lGlsl8siNM89/uZf207EFQq3XZA5RlBPFJ3rsmZsUePjlpr6mrt4FOo85o2taMCXgBjAMDKrPwKVKqji3YsX/kUVM8SCeo41YvU3JtoxkbD+lCYre7ml2Q4bPSgvsMCKJ82OajaF3hevmiR8FggxKK+FOz5osk=) 2026-04-07 00:26:21.138712 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCZvUO9ewVt6Tzm/c5IuDSd2TlutSfvpIWLIz23QdpHMfgdWLciRX/vdQpDp0N2Q6jlqViQH6EB6w0dF0pzXlYA=) 2026-04-07 
00:26:21.138729 | orchestrator | 2026-04-07 00:26:21.138746 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:21.138763 | orchestrator | Tuesday 07 April 2026 00:26:17 +0000 (0:00:01.079) 0:00:21.071 ********* 2026-04-07 00:26:21.138780 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEL5YipM9GdwTzoQcNb5WIDelpIr9RkTDf4bbHpHXUagkZ8/n7i/yST1pc9j5JUtL9Ne7SIOiMDHisjI9s5pEVJuocMl5v1u7zdlmM4L3+V2MpwqTiRh/PcKdTafCEak198+hfqVXXCmxh8i651hLTmrn5kLkE+VBM4X2iYuaiCUnyjgnsj+/GVn0SGUU1F5F1GWzuR/qkF9B0FqWoTvP1ITK4RVyGu0wjUv6aYiTFkc5FVEidu6I9oi2bpYh4s5+ROZUa3T3IPR8XawDBN/UNqGuaC1Vc+IpacM3iMBIPu1Wuia95jBSBSADwFqHf/gHG+x9pdcXZXOonDi72toOv24T4aYwA00Cde/rjqUTfmVxjN4bNlfS09CRfzmb54Gbx9FXzDk0DPrIeQw25Hys5a4rljYjxGaoiERp8nDEWvhY2o6fik19EqADNLljQ5NtJLALGqDDmZt9xiceOyhvke2mpwBj1QnPWHpMJTN60EZW8fusyqRH78zXu35gxMHs=) 2026-04-07 00:26:21.138807 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKSTLhs8ZJj3sqfUVn12f6P+7zJj49Wa7HDa5TKR7llbgDgLUhMmpAHYo40cwUlFjV8/KVnKEHEtes/IwrZlr6g=) 2026-04-07 00:26:21.138825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE1SQW2oKEnonHBKXG41zGT06yqMVP/tGsOSly7zwC5H) 2026-04-07 00:26:21.138841 | orchestrator | 2026-04-07 00:26:21.138885 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:21.138901 | orchestrator | Tuesday 07 April 2026 00:26:18 +0000 (0:00:01.042) 0:00:22.113 ********* 2026-04-07 00:26:21.138918 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfzd5Bac2/RxdfmoOSgT+qCM5lUbYf44LJ8qoJqUz3+) 2026-04-07 00:26:21.138934 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCx1NQ9OuLeNY1rJpedPYFD4465cc1bQGaa4WQC1KBp1qVMtm2EZIl9Nc9U6btQLLGT83RmFpLPgjNrQopAmrpEwlvqtFb0jouGf0WMX4YzHPxomELn8/tWdUldNhpUXwJ8kyhDEO7tIRcFKtoCDHnJ23iFoU7Py64a6o7TFND0Q4hB0MTywC96NJ+lMSeTwd1ycc18bbQNcxIExsU9SSHR3qCQ1pbB/na4k7XsNW6PxH4NY9AMYq0A75vfkqt4ly6/NJesFX7iJul/D8KwzwSB6abQpfaRi2dvY7Gmlgu0ndeKRqNmENSD5dyeL0cGmiYxaZWR7mFDt6WCG1rGiZOUbr5w7P48+FHUPtkgAX0cLuYGNLKiT5jiy8N3mGmNftpofrftZl9CQCvjpUJA2c19pLkl+BQK5coZdhPbl//U6D4mf27/HBhqLY5HEJ70evJAWDXqv567dF/jTv+bgJwmqdrIgPINbfCRbGca3e+eh8qVGSmbZQZhect2GB0eQHM=) 2026-04-07 00:26:21.138950 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPb0GA6zFun+irlCBcFzFreZHiM/ftfnlxcpjG6dLbEUWR8szrVi4fY7O1HWJ1YhU+/i5OUmEREfKSGwG/1M7tM=) 2026-04-07 00:26:21.138966 | orchestrator | 2026-04-07 00:26:21.138981 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:21.138997 | orchestrator | Tuesday 07 April 2026 00:26:20 +0000 (0:00:01.100) 0:00:23.213 ********* 2026-04-07 00:26:21.139013 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMt9+hCbe2Lt2gookg8dOEjxzrSfIX9kZocmcpvJ+8Yq) 2026-04-07 00:26:21.139037 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCii/2Fyk6b4npk20Y1WCyAL54gmggzIpFeiOjBARB/TLlUKT3QllSH14FYihOqGiATdSVikd3fV1jNR9uEj1rDHwEqaXrtFc/5fnslsjs1kXDhGI4APu45145Qlxv5RSPKDi9kXlEM2BP1HzlUbtobnpNZuZBqusaiui2Rz0GX4f6rI1ZtpQ1M0y+3Z1S/Y0Xk8SnNX92zHMK6scgdoPb9IMxtWGBeHn/l0LbnMKK+ApWkIYS3G+9fwXphU0Q4Z1lQj6PW5y9cqPHdFswqXiZNw/LbPKS0NlZaCB8Nfw+sDs9KIjNg/GtOx+n49eE7IqP1rTTZ827UKRWpUtXCZ9vofGk7I7iyEpOkaM0/oorXgY4N1ItLrlsuoQEy252U9BC+yJyETksmv5HzDP0hLT2aYZVi+t2pn2HCzpD3mxCGsQEk4xnAO+KzBaJ0L8NsCJCCo8b/BPKOPjYEyeXx0cSX6R70RjXQsJjpbZQh1OtVASsGySkyV0V771wiuI2XsE8=) 2026-04-07 00:26:21.139068 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXN2+Y4qLrgxJInpObYFh8UfqkP2o3ZVCwANTPByqHcZsQj/E3sSr/nKbr6lAxReGn2VehiGWJ/mlYcL2xvBbE=) 2026-04-07 00:26:25.243578 | orchestrator | 2026-04-07 00:26:25.243647 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:25.243654 | orchestrator | Tuesday 07 April 2026 00:26:21 +0000 (0:00:01.085) 0:00:24.298 ********* 2026-04-07 00:26:25.243660 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXq3YXoY4ZlEFYhcrCuuMmN0/N8io75nSYcWWIA2GXG+6Lcbl5E3CrkHfx9sDM76YE5BrnlKVvWutYFxy8X2ZE=) 2026-04-07 00:26:25.243680 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXXfFu6oiZn9jPFq9VcZzuznRY2ZZVJRl737ZVHVcGE2v1BfYabnVK1nCNglYBH+N3XvFVf/y23geM/6D5mGUR092ARLtfbYAHSpdCH/cvKo4Y0pcoOA0KDJvAI1xeWGKK9feLnP6O1UePG+Wf6vl/aTmyuvs0OSC25j3Enn1WF/0uNT0IZTkjDaFachylXowcx93pimWg809leYzlpYQQ32puDV1Fc3pJEU1B+Y/tDy1xJkovnKTcOJSKSzFbXVNcKweL4MCvcsiDh1TuBtszFTRpHrOlYYPnPEJ75J4YkTa22rxuV82D8ctwpYWn0wiODkDHkeBOYpGxdajBtXWqudcyq668N5otbMWFyP7d58YcAX5yyAkbKGNlzEhO2G5HEmInBpNvNyom50kDs2tH5bpzo4yLUkyyZoj4SWkhmgA/L2N5YjV1lk3N/ZLFfbItJ857nhQWrxmDwJr8XMzkiXYIy50jys8ggH2OOi7tegeHGd0y012ioV/ZSft9gks=) 2026-04-07 00:26:25.243701 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKJyYm+kNBiWb+50rIOdinkaL3lnK64dagwXE+SBe4z5) 2026-04-07 00:26:25.243707 | orchestrator | 2026-04-07 00:26:25.243711 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:25.243715 | orchestrator | Tuesday 07 April 2026 00:26:22 +0000 (0:00:01.029) 0:00:25.328 ********* 2026-04-07 00:26:25.243719 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIJ4kIoN6ufzQPaZXjY9OfpvCw2tii08/RAQiT5TxYaNH) 2026-04-07 00:26:25.243724 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAfvvovy58mkkbn9349Zlwxd4IST7wIVZUyXKXNEPvndPiKSnqtC67U3einlQSnrlYQcUS7984kxgYzwASjnRyC6oWX8/OQ9eESNnTIOMThbsf2B6MB+sxmOnpkQ6vaUS9uMAbXxXKyBQP5ywYqcb7VSTGV+4i97BCAA1UZRTbVJ2qIOAHpTxA85VGJChvYon35swWpcJQ4yujrDiaqSLl84ro2BzQ7VKVmzOopxmr9ofOUSZueBSVYO716NnPl6t/CY1p4Mk0Mv4WEi3G5XHLLIh/nuzG5kbINMJD6yhYXdxD/goo9+SZSDKDlM7fwRlN5edUhFvM8V4qhIYgDs8lQgSjGFxhqSNhReRMDw7SisbNYXKpT4sD02M6F8J9KV4b3epAKIGyKHrx4fdBwC/B7zMVjS8ZxJlgFNPg8UQSu5Gyzjx96x8WXahVu7iN2aiO1MzidCK3L+gK8SjItDrz3fuIxm0912veHz0GsDIYZ8H+AQ71M2nvf8aAuJUEQas=) 2026-04-07 00:26:25.243728 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJSDqrra+L8WIK0PdUsIGCp9opJNGWZJyuFpjMhO/nxsCFJAwNyDVGP9jD9SntStUvCnYGYyPRiKQF+cFa6CyHc=) 2026-04-07 00:26:25.243733 | orchestrator | 2026-04-07 00:26:25.243737 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 00:26:25.243741 | orchestrator | Tuesday 07 April 2026 00:26:23 +0000 (0:00:01.069) 0:00:26.397 ********* 2026-04-07 00:26:25.243745 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAyz2yVL3jONHBGLKKgCILgXgC8QVeesZ91GMZy7vajx) 2026-04-07 00:26:25.243749 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDApLv3nUkp8Y2JabJ5/MuV++vMuEdYsC7jlmmz7qFQHhGtVrE7o5v7G2i5ejaq8yroTHlPI0KZW1wHlXDktDBgoPGP3iaMWUd69c8iyTijlFCqvhsebx8RUYDg/YUVeGC1h4cfWQIU3kF7mWTpq1xsL0/9EaqOHe27cYC9/NpCTFkGUSqZ7+tftc5posNXhFa7P8NiiDPJcxkKRtxBZd57e38fo3KNnIsuWduf83NJPPgckTpjyXf9/YRjw6gIQ9jvLom/KpOXXHPNK3LwJEdwaYpgHCKh862eaxcl1exKR23/bKinwDPkSRtp8eDOBU4TErPnZmiPFPtYe1349Pvksg5Cv8kv6AqQKktKEpMrxWAwelXfYcKLfnHlcTEm2lwRUBxaMGjHWZM9+GEH9pKYIkxCk6XlpWhy8Qj7DU1WlLBOSj5b1KC3NnZq4byeHlI0tl2vo78Jg1PK++zDNtCy4222biFHR/enC2+nHyxnOMCWEwutLmn64jqg0c25X4U=) 2026-04-07 00:26:25.243753 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKydSTiKS3W6aRPUlEFAWhePW3KWv3rUh3WpjoyxV/iKZDSmoU01B0bQP5cjrmfXT9oC+Ll6KWgUbxyMADurO2w=) 2026-04-07 00:26:25.243758 | orchestrator | 2026-04-07 00:26:25.243762 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-07 00:26:25.243766 | orchestrator | Tuesday 07 April 2026 00:26:24 +0000 (0:00:01.054) 0:00:27.452 ********* 2026-04-07 00:26:25.243770 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-07 00:26:25.243775 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-07 00:26:25.243779 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-07 00:26:25.243784 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-07 00:26:25.243791 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-07 00:26:25.243798 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-07 00:26:25.243804 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-07 00:26:25.243811 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:26:25.243818 | orchestrator | 2026-04-07 00:26:25.243839 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2026-04-07 00:26:25.243846 | orchestrator | Tuesday 07 April 2026 00:26:24 +0000 (0:00:00.171) 0:00:27.624 ********* 2026-04-07 00:26:25.243924 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:26:25.243929 | orchestrator | 2026-04-07 00:26:25.243933 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-07 00:26:25.243938 | orchestrator | Tuesday 07 April 2026 00:26:24 +0000 (0:00:00.040) 0:00:27.665 ********* 2026-04-07 00:26:25.243942 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:26:25.243946 | orchestrator | 2026-04-07 00:26:25.243950 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-07 00:26:25.243954 | orchestrator | Tuesday 07 April 2026 00:26:24 +0000 (0:00:00.040) 0:00:27.705 ********* 2026-04-07 00:26:25.243958 | orchestrator | changed: [testbed-manager] 2026-04-07 00:26:25.243962 | orchestrator | 2026-04-07 00:26:25.243967 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:26:25.243971 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 00:26:25.243976 | orchestrator | 2026-04-07 00:26:25.243981 | orchestrator | 2026-04-07 00:26:25.243985 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:26:25.243989 | orchestrator | Tuesday 07 April 2026 00:26:25 +0000 (0:00:00.473) 0:00:28.178 ********* 2026-04-07 00:26:25.243993 | orchestrator | =============================================================================== 2026-04-07 00:26:25.243997 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.55s 2026-04-07 00:26:25.244001 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.33s 2026-04-07 00:26:25.244013 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-07 00:26:25.244018 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-04-07 00:26:25.244022 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-04-07 00:26:25.244026 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-04-07 00:26:25.244036 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-04-07 00:26:25.244040 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-04-07 00:26:25.244044 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-07 00:26:25.244048 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-07 00:26:25.244052 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-07 00:26:25.244056 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-07 00:26:25.244061 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-07 00:26:25.244070 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-07 00:26:25.244075 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-07 00:26:25.244080 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-04-07 00:26:25.244087 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.47s 2026-04-07 00:26:25.244093 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-04-07 00:26:25.244100 | 
orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-04-07 00:26:25.244106 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.14s 2026-04-07 00:26:25.428075 | orchestrator | + osism apply squid 2026-04-07 00:26:36.799313 | orchestrator | 2026-04-07 00:26:36 | INFO  | Prepare task for execution of squid. 2026-04-07 00:26:36.881698 | orchestrator | 2026-04-07 00:26:36 | INFO  | Task 669dcd07-d1aa-44f3-9eb6-f28085e6922d (squid) was prepared for execution. 2026-04-07 00:26:36.881778 | orchestrator | 2026-04-07 00:26:36 | INFO  | It takes a moment until task 669dcd07-d1aa-44f3-9eb6-f28085e6922d (squid) has been started and output is visible here. 2026-04-07 00:28:31.048491 | orchestrator | 2026-04-07 00:28:31.048639 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-07 00:28:31.048662 | orchestrator | 2026-04-07 00:28:31.048675 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-07 00:28:31.048686 | orchestrator | Tuesday 07 April 2026 00:26:40 +0000 (0:00:00.188) 0:00:00.188 ********* 2026-04-07 00:28:31.048698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-07 00:28:31.048710 | orchestrator | 2026-04-07 00:28:31.048721 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-07 00:28:31.048732 | orchestrator | Tuesday 07 April 2026 00:26:40 +0000 (0:00:00.075) 0:00:00.264 ********* 2026-04-07 00:28:31.048743 | orchestrator | ok: [testbed-manager] 2026-04-07 00:28:31.048755 | orchestrator | 2026-04-07 00:28:31.048766 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-07 00:28:31.048777 | orchestrator | Tuesday 07 April 2026 00:26:42 +0000 
(0:00:02.433) 0:00:02.697 ********* 2026-04-07 00:28:31.048788 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-07 00:28:31.048799 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-07 00:28:31.048810 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-07 00:28:31.048821 | orchestrator | 2026-04-07 00:28:31.048832 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-07 00:28:31.048843 | orchestrator | Tuesday 07 April 2026 00:26:43 +0000 (0:00:01.260) 0:00:03.958 ********* 2026-04-07 00:28:31.048853 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-07 00:28:31.048865 | orchestrator | 2026-04-07 00:28:31.048876 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-07 00:28:31.048887 | orchestrator | Tuesday 07 April 2026 00:26:44 +0000 (0:00:01.065) 0:00:05.024 ********* 2026-04-07 00:28:31.048897 | orchestrator | ok: [testbed-manager] 2026-04-07 00:28:31.048935 | orchestrator | 2026-04-07 00:28:31.048947 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-07 00:28:31.048994 | orchestrator | Tuesday 07 April 2026 00:26:45 +0000 (0:00:00.342) 0:00:05.366 ********* 2026-04-07 00:28:31.049020 | orchestrator | changed: [testbed-manager] 2026-04-07 00:28:31.049041 | orchestrator | 2026-04-07 00:28:31.049092 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-07 00:28:31.049111 | orchestrator | Tuesday 07 April 2026 00:26:46 +0000 (0:00:00.928) 0:00:06.294 ********* 2026-04-07 00:28:31.049124 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-07 00:28:31.049138 | orchestrator | ok: [testbed-manager] 2026-04-07 00:28:31.049151 | orchestrator | 2026-04-07 00:28:31.049164 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-07 00:28:31.049176 | orchestrator | Tuesday 07 April 2026 00:27:17 +0000 (0:00:31.658) 0:00:37.953 ********* 2026-04-07 00:28:31.049189 | orchestrator | changed: [testbed-manager] 2026-04-07 00:28:31.049202 | orchestrator | 2026-04-07 00:28:31.049216 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-07 00:28:31.049229 | orchestrator | Tuesday 07 April 2026 00:27:30 +0000 (0:00:12.247) 0:00:50.200 ********* 2026-04-07 00:28:31.049240 | orchestrator | Pausing for 60 seconds 2026-04-07 00:28:31.049252 | orchestrator | changed: [testbed-manager] 2026-04-07 00:28:31.049263 | orchestrator | 2026-04-07 00:28:31.049274 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-07 00:28:31.049285 | orchestrator | Tuesday 07 April 2026 00:28:30 +0000 (0:01:00.091) 0:01:50.291 ********* 2026-04-07 00:28:31.049295 | orchestrator | ok: [testbed-manager] 2026-04-07 00:28:31.049306 | orchestrator | 2026-04-07 00:28:31.049317 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-07 00:28:31.049353 | orchestrator | Tuesday 07 April 2026 00:28:30 +0000 (0:00:00.064) 0:01:50.356 ********* 2026-04-07 00:28:31.049365 | orchestrator | changed: [testbed-manager] 2026-04-07 00:28:31.049381 | orchestrator | 2026-04-07 00:28:31.049399 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:28:31.049418 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:28:31.049436 | orchestrator | 2026-04-07 00:28:31.049456 | orchestrator | 2026-04-07 00:28:31.049468 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-07 00:28:31.049479 | orchestrator | Tuesday 07 April 2026 00:28:30 +0000 (0:00:00.564) 0:01:50.920 ********* 2026-04-07 00:28:31.049489 | orchestrator | =============================================================================== 2026-04-07 00:28:31.049500 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-04-07 00:28:31.049511 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.66s 2026-04-07 00:28:31.049522 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.25s 2026-04-07 00:28:31.049532 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.43s 2026-04-07 00:28:31.049543 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.26s 2026-04-07 00:28:31.049553 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2026-04-07 00:28:31.049564 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2026-04-07 00:28:31.049575 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.56s 2026-04-07 00:28:31.049585 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-04-07 00:28:31.049596 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-04-07 00:28:31.049607 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-04-07 00:28:31.213651 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-07 00:28:31.213728 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-07 00:28:31.220132 | orchestrator | + set -e 2026-04-07 00:28:31.220200 | orchestrator | + NAMESPACE=kolla 2026-04-07 
00:28:31.220208 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-07 00:28:31.223428 | orchestrator | ++ semver latest 9.0.0 2026-04-07 00:28:31.266001 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-07 00:28:31.266097 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-07 00:28:31.267126 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-07 00:28:42.655840 | orchestrator | 2026-04-07 00:28:42 | INFO  | Prepare task for execution of operator. 2026-04-07 00:28:42.729560 | orchestrator | 2026-04-07 00:28:42 | INFO  | Task 96395cc2-1f7d-44cb-8f79-aeeee540b55a (operator) was prepared for execution. 2026-04-07 00:28:42.729640 | orchestrator | 2026-04-07 00:28:42 | INFO  | It takes a moment until task 96395cc2-1f7d-44cb-8f79-aeeee540b55a (operator) has been started and output is visible here. 2026-04-07 00:28:58.326396 | orchestrator | 2026-04-07 00:28:58.367629 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-07 00:28:58.367714 | orchestrator | 2026-04-07 00:28:58.367728 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 00:28:58.367740 | orchestrator | Tuesday 07 April 2026 00:28:45 +0000 (0:00:00.182) 0:00:00.182 ********* 2026-04-07 00:28:58.367753 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:28:58.367764 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:28:58.367775 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:28:58.367786 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:28:58.367797 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:28:58.367812 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:28:58.367823 | orchestrator | 2026-04-07 00:28:58.367835 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-07 00:28:58.367875 | orchestrator | Tuesday 07 April 2026 00:28:49 
+0000 (0:00:03.299) 0:00:03.481 ********* 2026-04-07 00:28:58.367886 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:28:58.367897 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:28:58.367908 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:28:58.367918 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:28:58.367971 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:28:58.367983 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:28:58.367994 | orchestrator | 2026-04-07 00:28:58.368005 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-07 00:28:58.368016 | orchestrator | 2026-04-07 00:28:58.368026 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-07 00:28:58.368037 | orchestrator | Tuesday 07 April 2026 00:28:50 +0000 (0:00:00.923) 0:00:04.405 ********* 2026-04-07 00:28:58.368048 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:28:58.368059 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:28:58.368070 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:28:58.368080 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:28:58.368091 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:28:58.368101 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:28:58.368112 | orchestrator | 2026-04-07 00:28:58.368123 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-07 00:28:58.368133 | orchestrator | Tuesday 07 April 2026 00:28:50 +0000 (0:00:00.167) 0:00:04.573 ********* 2026-04-07 00:28:58.368144 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:28:58.368155 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:28:58.368165 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:28:58.368176 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:28:58.368204 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:28:58.368215 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:28:58.368226 | orchestrator | 
2026-04-07 00:28:58.368237 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-07 00:28:58.368248 | orchestrator | Tuesday 07 April 2026 00:28:50 +0000 (0:00:00.153) 0:00:04.726 *********
2026-04-07 00:28:58.368259 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:28:58.368270 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:28:58.368281 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:28:58.368291 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:28:58.368302 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:28:58.368313 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:28:58.368324 | orchestrator |
2026-04-07 00:28:58.368335 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-07 00:28:58.368346 | orchestrator | Tuesday 07 April 2026 00:28:51 +0000 (0:00:00.701) 0:00:05.428 *********
2026-04-07 00:28:58.368356 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:28:58.368367 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:28:58.368378 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:28:58.368388 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:28:58.368399 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:28:58.368410 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:28:58.368420 | orchestrator |
2026-04-07 00:28:58.368431 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-07 00:28:58.368442 | orchestrator | Tuesday 07 April 2026 00:28:51 +0000 (0:00:00.913) 0:00:06.342 *********
2026-04-07 00:28:58.368453 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-07 00:28:58.368464 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-07 00:28:58.368475 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-07 00:28:58.368485 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-07 00:28:58.368496 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-07 00:28:58.368507 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-07 00:28:58.368517 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-07 00:28:58.368528 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-07 00:28:58.368547 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-07 00:28:58.368558 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-07 00:28:58.368569 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-07 00:28:58.368579 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-07 00:28:58.368590 | orchestrator |
2026-04-07 00:28:58.368601 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-07 00:28:58.368611 | orchestrator | Tuesday 07 April 2026 00:28:53 +0000 (0:00:01.285) 0:00:07.627 *********
2026-04-07 00:28:58.368622 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:28:58.368633 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:28:58.368644 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:28:58.368654 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:28:58.368665 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:28:58.368675 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:28:58.368686 | orchestrator |
2026-04-07 00:28:58.368697 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-07 00:28:58.368708 | orchestrator | Tuesday 07 April 2026 00:28:54 +0000 (0:00:01.462) 0:00:09.090 *********
2026-04-07 00:28:58.368719 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 00:28:58.368730 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 00:28:58.368741 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 00:28:58.368752 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 00:28:58.368762 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 00:28:58.368803 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 00:28:58.368814 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-07 00:28:58.368825 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-07 00:28:58.368836 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-07 00:28:58.368847 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-07 00:28:58.368858 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-07 00:28:58.368868 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-07 00:28:58.368879 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-07 00:28:58.368890 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-07 00:28:58.368901 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-07 00:28:58.368917 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-07 00:28:58.368952 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-07 00:28:58.368963 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-07 00:28:58.368974 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-07 00:28:58.368985 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-07 00:28:58.368995 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-07 00:28:58.369006 | orchestrator |
2026-04-07 00:28:58.369017 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-07 00:28:58.369029 | orchestrator | Tuesday 07 April 2026 00:28:56 +0000 (0:00:01.480) 0:00:10.571 *********
2026-04-07 00:28:58.369039 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:28:58.369050 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:28:58.369061 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:28:58.369071 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:28:58.369082 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:28:58.369092 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:28:58.369103 | orchestrator |
2026-04-07 00:28:58.369114 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-07 00:28:58.369132 | orchestrator | Tuesday 07 April 2026 00:28:56 +0000 (0:00:00.166) 0:00:10.738 *********
2026-04-07 00:28:58.369143 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:28:58.369154 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:28:58.369164 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:28:58.369175 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:28:58.369186 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:28:58.369196 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:28:58.369207 | orchestrator |
2026-04-07 00:28:58.369218 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-07 00:28:58.369229 | orchestrator | Tuesday 07 April 2026 00:28:56 +0000 (0:00:00.160) 0:00:10.898 *********
2026-04-07 00:28:58.369239 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:28:58.369250 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:28:58.369261 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:28:58.369271 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:28:58.369282 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:28:58.369293 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:28:58.369303 | orchestrator |
2026-04-07 00:28:58.369314 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-07 00:28:58.369325 | orchestrator | Tuesday 07 April 2026 00:28:57 +0000 (0:00:00.571) 0:00:11.469 *********
2026-04-07 00:28:58.369336 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:28:58.369347 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:28:58.369357 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:28:58.369368 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:28:58.369379 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:28:58.369389 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:28:58.369400 | orchestrator |
2026-04-07 00:28:58.369411 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-07 00:28:58.369422 | orchestrator | Tuesday 07 April 2026 00:28:57 +0000 (0:00:00.161) 0:00:11.631 *********
2026-04-07 00:28:58.369432 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 00:28:58.369443 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:28:58.369454 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-07 00:28:58.369465 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-07 00:28:58.369476 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:28:58.369486 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:28:58.369497 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-07 00:28:58.369508 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-07 00:28:58.369519 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:28:58.369529 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:28:58.369540 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-07 00:28:58.369550 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:28:58.369561 | orchestrator |
2026-04-07 00:28:58.369572 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-07 00:28:58.369583 | orchestrator | Tuesday 07 April 2026 00:28:58 +0000 (0:00:00.807) 0:00:12.439 *********
2026-04-07 00:28:58.369593 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:28:58.369604 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:28:58.369615 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:28:58.369625 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:28:58.369636 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:28:58.369646 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:28:58.369657 | orchestrator |
2026-04-07 00:28:58.369668 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-07 00:28:58.369679 | orchestrator | Tuesday 07 April 2026 00:28:58 +0000 (0:00:00.150) 0:00:12.589 *********
2026-04-07 00:28:58.369689 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:28:58.369700 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:28:58.369711 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:28:58.369738 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:28:58.369775 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:28:59.557583 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:28:59.557663 | orchestrator |
2026-04-07 00:28:59.557673 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-07 00:28:59.557681 | orchestrator | Tuesday 07 April 2026 00:28:58 +0000 (0:00:00.126) 0:00:12.715 *********
2026-04-07 00:28:59.557687 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:28:59.557693 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:28:59.557699 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:28:59.557705 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:28:59.557711 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:28:59.557717 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:28:59.557723 | orchestrator |
2026-04-07 00:28:59.557729 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-07 00:28:59.557747 | orchestrator | Tuesday 07 April 2026 00:28:58 +0000 (0:00:00.138) 0:00:12.854 *********
2026-04-07 00:28:59.557753 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:28:59.557759 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:28:59.557773 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:28:59.557780 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:28:59.557785 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:28:59.557791 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:28:59.557797 | orchestrator |
2026-04-07 00:28:59.557803 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-07 00:28:59.557809 | orchestrator | Tuesday 07 April 2026 00:28:59 +0000 (0:00:00.690) 0:00:13.544 *********
2026-04-07 00:28:59.557814 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:28:59.557820 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:28:59.557826 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:28:59.557831 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:28:59.557837 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:28:59.557843 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:28:59.557848 | orchestrator |
2026-04-07 00:28:59.557854 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:28:59.557861 | orchestrator | testbed-node-0 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-04-07 00:28:59.557885 | orchestrator | testbed-node-1 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-04-07 00:28:59.557891 | orchestrator | testbed-node-2 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-04-07 00:28:59.557897 | orchestrator | testbed-node-3 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-04-07 00:28:59.557903 | orchestrator | testbed-node-4 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-04-07 00:28:59.557909 | orchestrator | testbed-node-5 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-04-07 00:28:59.557914 | orchestrator |
2026-04-07 00:28:59.557920 | orchestrator |
2026-04-07 00:28:59.558133 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:28:59.558147 | orchestrator | Tuesday 07 April 2026 00:28:59 +0000 (0:00:00.211) 0:00:13.756 *********
2026-04-07 00:28:59.558157 | orchestrator | ===============================================================================
2026-04-07 00:28:59.558167 | orchestrator | Gathering Facts --------------------------------------------------------- 3.30s
2026-04-07 00:28:59.558179 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.48s
2026-04-07 00:28:59.558190 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.46s
2026-04-07 00:28:59.558222 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.29s
2026-04-07 00:28:59.558229 | orchestrator | Do not require tty for all users ---------------------------------------- 0.92s
2026-04-07 00:28:59.558236 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.91s
2026-04-07 00:28:59.558243 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.81s
2026-04-07 00:28:59.558250 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.70s
2026-04-07 00:28:59.558257 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s
2026-04-07 00:28:59.558264 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s
2026-04-07 00:28:59.558271 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2026-04-07 00:28:59.558278 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-04-07 00:28:59.558285 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-04-07 00:28:59.558291 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-04-07 00:28:59.558297 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s
2026-04-07 00:28:59.558302 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-04-07 00:28:59.558308 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-04-07 00:28:59.558314 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-04-07 00:28:59.558319 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2026-04-07 00:28:59.721602 | orchestrator | + osism apply --environment custom facts
2026-04-07 00:29:00.943358 | orchestrator | 2026-04-07 00:29:00 | INFO  | Trying to run play facts in environment custom
2026-04-07 00:29:11.007703 | orchestrator | 2026-04-07 00:29:11 | INFO  | Prepare task for execution of facts.
2026-04-07 00:29:11.077435 | orchestrator | 2026-04-07 00:29:11 | INFO  | Task a7a8b59f-d407-4155-ac56-e59b9e218b8c (facts) was prepared for execution.
2026-04-07 00:29:11.077536 | orchestrator | 2026-04-07 00:29:11 | INFO  | It takes a moment until task a7a8b59f-d407-4155-ac56-e59b9e218b8c (facts) has been started and output is visible here.
2026-04-07 00:29:56.075626 | orchestrator |
2026-04-07 00:29:56.075736 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-07 00:29:56.075754 | orchestrator |
2026-04-07 00:29:56.075766 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-07 00:29:56.075794 | orchestrator | Tuesday 07 April 2026 00:29:14 +0000 (0:00:00.108) 0:00:00.108 *********
2026-04-07 00:29:56.075806 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:29:56.075819 | orchestrator | ok: [testbed-manager]
2026-04-07 00:29:56.075839 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:29:56.075857 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:29:56.075875 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:29:56.075892 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:29:56.075910 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:29:56.075928 | orchestrator |
2026-04-07 00:29:56.075946 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-07 00:29:56.076226 | orchestrator | Tuesday 07 April 2026 00:29:15 +0000 (0:00:01.410) 0:00:01.519 *********
2026-04-07 00:29:56.076308 | orchestrator | ok: [testbed-manager]
2026-04-07 00:29:56.076405 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:29:56.076443 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:29:56.076462 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:29:56.076482 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:29:56.076502 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:29:56.076521 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:29:56.076573 | orchestrator |
2026-04-07 00:29:56.076595 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-07 00:29:56.076634 | orchestrator |
2026-04-07 00:29:56.076756 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-07 00:29:56.076830 | orchestrator | Tuesday 07 April 2026 00:29:16 +0000 (0:00:01.223) 0:00:02.743 *********
2026-04-07 00:29:56.076849 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:29:56.076865 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:29:56.076883 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:29:56.076900 | orchestrator |
2026-04-07 00:29:56.076917 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-07 00:29:56.076936 | orchestrator | Tuesday 07 April 2026 00:29:16 +0000 (0:00:00.093) 0:00:02.836 *********
2026-04-07 00:29:56.076981 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:29:56.076999 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:29:56.077016 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:29:56.077033 | orchestrator |
2026-04-07 00:29:56.077051 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-07 00:29:56.077069 | orchestrator | Tuesday 07 April 2026 00:29:16 +0000 (0:00:00.164) 0:00:03.005 *********
2026-04-07 00:29:56.077086 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:29:56.077103 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:29:56.077120 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:29:56.077138 | orchestrator |
2026-04-07 00:29:56.077155 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-07 00:29:56.077173 | orchestrator | Tuesday 07 April 2026 00:29:17 +0000 (0:00:00.097) 0:00:03.170 *********
2026-04-07 00:29:56.077192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:29:56.077211 | orchestrator |
2026-04-07 00:29:56.077228 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-07 00:29:56.077245 | orchestrator | Tuesday 07 April 2026 00:29:17 +0000 (0:00:00.426) 0:00:03.267 *********
2026-04-07 00:29:56.077263 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:29:56.077280 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:29:56.077296 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:29:56.077313 | orchestrator |
2026-04-07 00:29:56.077331 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-07 00:29:56.077348 | orchestrator | Tuesday 07 April 2026 00:29:17 +0000 (0:00:00.116) 0:00:03.694 *********
2026-04-07 00:29:56.077365 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:29:56.077382 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:29:56.077399 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:29:56.077434 | orchestrator |
2026-04-07 00:29:56.077452 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-07 00:29:56.077469 | orchestrator | Tuesday 07 April 2026 00:29:17 +0000 (0:00:00.116) 0:00:03.810 *********
2026-04-07 00:29:56.077486 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:29:56.077503 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:29:56.077521 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:29:56.077538 | orchestrator |
2026-04-07 00:29:56.077555 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-07 00:29:56.077572 | orchestrator | Tuesday 07 April 2026 00:29:18 +0000 (0:00:01.142) 0:00:04.953 *********
2026-04-07 00:29:56.077590 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:29:56.077608 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:29:56.077625 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:29:56.077643 | orchestrator |
2026-04-07 00:29:56.077660 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-07 00:29:56.077677 | orchestrator | Tuesday 07 April 2026 00:29:19 +0000 (0:00:00.495) 0:00:05.448 *********
2026-04-07 00:29:56.077695 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:29:56.077713 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:29:56.077730 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:29:56.077767 | orchestrator |
2026-04-07 00:29:56.077787 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-07 00:29:56.077807 | orchestrator | Tuesday 07 April 2026 00:29:20 +0000 (0:00:01.185) 0:00:06.633 *********
2026-04-07 00:29:56.077827 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:29:56.077847 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:29:56.077867 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:29:56.077886 | orchestrator |
2026-04-07 00:29:56.077906 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-07 00:29:56.077926 | orchestrator | Tuesday 07 April 2026 00:29:38 +0000 (0:00:17.848) 0:00:24.482 *********
2026-04-07 00:29:56.077946 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:29:56.077990 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:29:56.078011 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:29:56.078115 | orchestrator |
2026-04-07 00:29:56.078133 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-07 00:29:56.078491 | orchestrator | Tuesday 07 April 2026 00:29:38 +0000 (0:00:00.100) 0:00:24.583 *********
2026-04-07 00:29:56.078596 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:29:56.078617 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:29:56.078637 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:29:56.078657 | orchestrator |
2026-04-07 00:29:56.078678 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-07 00:29:56.078699 | orchestrator | Tuesday 07 April 2026 00:29:46 +0000 (0:00:08.097) 0:00:32.680 *********
2026-04-07 00:29:56.078719 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:29:56.078739 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:29:56.078759 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:29:56.078779 | orchestrator |
2026-04-07 00:29:56.078798 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-07 00:29:56.078817 | orchestrator | Tuesday 07 April 2026 00:29:47 +0000 (0:00:00.472) 0:00:33.153 *********
2026-04-07 00:29:56.078835 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-07 00:29:56.078854 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-07 00:29:56.078873 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-07 00:29:56.078892 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-07 00:29:56.078911 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-07 00:29:56.079017 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-07 00:29:56.079138 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-07 00:29:56.079176 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-07 00:29:56.079196 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-07 00:29:56.079216 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-07 00:29:56.079235 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-07 00:29:56.079255 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-07 00:29:56.079275 | orchestrator |
2026-04-07 00:29:56.079295 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-07 00:29:56.079315 | orchestrator | Tuesday 07 April 2026 00:29:50 +0000 (0:00:03.743) 0:00:36.896 *********
2026-04-07 00:29:56.079335 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:29:56.079355 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:29:56.079375 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:29:56.079395 | orchestrator |
2026-04-07 00:29:56.079416 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-07 00:29:56.079436 | orchestrator |
2026-04-07 00:29:56.079456 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-07 00:29:56.079476 | orchestrator | Tuesday 07 April 2026 00:29:52 +0000 (0:00:01.288) 0:00:38.185 *********
2026-04-07 00:29:56.079514 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:29:56.079534 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:29:56.079554 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:29:56.079574 | orchestrator | ok: [testbed-manager]
2026-04-07 00:29:56.079595 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:29:56.079675 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:29:56.079697 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:29:56.079717 | orchestrator |
2026-04-07 00:29:56.079737 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:29:56.079759 | orchestrator | testbed-manager : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:29:56.079780 | orchestrator | testbed-node-0 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:29:56.079802 | orchestrator | testbed-node-1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:29:56.079822 | orchestrator | testbed-node-2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:29:56.079843 | orchestrator | testbed-node-3 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-04-07 00:29:56.079864 | orchestrator | testbed-node-4 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-04-07 00:29:56.079883 | orchestrator | testbed-node-5 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-04-07 00:29:56.079903 | orchestrator |
2026-04-07 00:29:56.079924 | orchestrator |
2026-04-07 00:29:56.079944 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:29:56.079991 | orchestrator | Tuesday 07 April 2026 00:29:56 +0000 (0:00:03.888) 0:00:42.074 *********
2026-04-07 00:29:56.080010 | orchestrator | ===============================================================================
2026-04-07 00:29:56.080029 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.85s
2026-04-07 00:29:56.080047 | orchestrator | Install required packages (Debian) -------------------------------------- 8.10s
2026-04-07 00:29:56.080067 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.89s
2026-04-07 00:29:56.080087 | orchestrator | Copy fact files --------------------------------------------------------- 3.74s
2026-04-07 00:29:56.080106 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-04-07 00:29:56.080124 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.29s
2026-04-07 00:29:56.080158 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-04-07 00:29:56.253089 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.19s
2026-04-07 00:29:56.253205 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.14s
2026-04-07 00:29:56.253220 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.50s
2026-04-07 00:29:56.253230 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-04-07 00:29:56.253240 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-04-07 00:29:56.253251 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s
2026-04-07 00:29:56.253260 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.16s
2026-04-07 00:29:56.253270 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2026-04-07 00:29:56.253280 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-04-07 00:29:56.253290 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.10s
2026-04-07 00:29:56.253324 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-04-07 00:29:56.433353 | orchestrator | + osism apply bootstrap
2026-04-07 00:30:07.709806 | orchestrator | 2026-04-07 00:30:07 | INFO  | Prepare task for execution of bootstrap.
2026-04-07 00:30:07.785979 | orchestrator | 2026-04-07 00:30:07 | INFO  | Task 9a5b75b0-0266-43d1-8074-1894ff30e0d9 (bootstrap) was prepared for execution.
2026-04-07 00:30:07.786084 | orchestrator | 2026-04-07 00:30:07 | INFO  | It takes a moment until task 9a5b75b0-0266-43d1-8074-1894ff30e0d9 (bootstrap) has been started and output is visible here.
2026-04-07 00:30:23.981604 | orchestrator |
2026-04-07 00:30:23.981749 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-07 00:30:23.981769 | orchestrator |
2026-04-07 00:30:23.981781 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-07 00:30:23.981806 | orchestrator | Tuesday 07 April 2026 00:30:11 +0000 (0:00:00.202) 0:00:00.202 *********
2026-04-07 00:30:23.981866 | orchestrator | ok: [testbed-manager]
2026-04-07 00:30:23.981881 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:30:23.981893 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:30:23.981904 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:30:23.981915 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:30:23.981926 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:30:23.981937 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:30:23.981948 | orchestrator |
2026-04-07 00:30:23.981959 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-07 00:30:23.981999 | orchestrator |
2026-04-07 00:30:23.982012 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-07 00:30:23.982074 | orchestrator | Tuesday 07 April 2026 00:30:11 +0000 (0:00:00.319) 0:00:00.521 *********
2026-04-07 00:30:23.982087 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:30:23.982099 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:30:23.982110 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:30:23.982125 | orchestrator | ok: [testbed-manager]
2026-04-07 00:30:23.982144 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:30:23.982163 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:30:23.982247 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:30:23.982266 | orchestrator |
2026-04-07 00:30:23.982278 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-07 00:30:23.982289 | orchestrator |
2026-04-07 00:30:23.982300 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-07 00:30:23.982311 | orchestrator | Tuesday 07 April 2026 00:30:16 +0000 (0:00:04.761) 0:00:05.283 *********
2026-04-07 00:30:23.982323 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-07 00:30:23.982334 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-07 00:30:23.982345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-07 00:30:23.982356 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-07 00:30:23.982367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:30:23.982378 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-07 00:30:23.982389 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 00:30:23.982399 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-07 00:30:23.982410 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 00:30:23.982421 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-07 00:30:23.982432 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-07 00:30:23.982443 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-07 00:30:23.982454 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-07 00:30:23.982465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-07 00:30:23.982476 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-07 00:30:23.982518 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-07 00:30:23.982530 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:30:23.982541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-07 00:30:23.982551 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-07 00:30:23.982562 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-07 00:30:23.982573 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:30:23.982583 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-07 00:30:23.982594 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-07 00:30:23.982604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-07 00:30:23.982615 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-07 00:30:23.982626 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-07 00:30:23.982636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-07 00:30:23.982663 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-07 00:30:23.982674 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-07 00:30:23.982684 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-07 00:30:23.982695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-07 00:30:23.982706 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-07 00:30:23.982716 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-07 00:30:23.982727 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-07 00:30:23.982738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 00:30:23.982748 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-07 00:30:23.982759 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-07 00:30:23.982770 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-07 00:30:23.982780 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-07 00:30:23.982791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 00:30:23.982802 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-07 00:30:23.982812 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-07 00:30:23.982823 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:30:23.982834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 00:30:23.982845 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-07 00:30:23.982856 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:30:23.982889 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-07 00:30:23.982901 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-07 00:30:23.982912 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-07 00:30:23.982922 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-07 00:30:23.982933 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:30:23.982944 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-07 00:30:23.982954 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-07 00:30:23.983037 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:30:23.983051 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-07 00:30:23.983063 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:30:23.983074 | orchestrator |
2026-04-07
00:30:23.983084 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-04-07 00:30:23.983095 | orchestrator | 2026-04-07 00:30:23.983106 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-04-07 00:30:23.983117 | orchestrator | Tuesday 07 April 2026 00:30:16 +0000 (0:00:00.423) 0:00:05.707 ********* 2026-04-07 00:30:23.983128 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:23.983149 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:23.983160 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:23.983170 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:23.983181 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:23.983192 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:23.983202 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:30:23.983213 | orchestrator | 2026-04-07 00:30:23.983224 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-04-07 00:30:23.983235 | orchestrator | Tuesday 07 April 2026 00:30:18 +0000 (0:00:01.213) 0:00:06.920 ********* 2026-04-07 00:30:23.983245 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:23.983256 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:23.983267 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:23.983277 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:30:23.983288 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:23.983299 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:23.983309 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:23.983320 | orchestrator | 2026-04-07 00:30:23.983331 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-04-07 00:30:23.983342 | orchestrator | Tuesday 07 April 2026 00:30:19 +0000 (0:00:01.424) 0:00:08.344 ********* 2026-04-07 00:30:23.983354 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:30:23.983367 | orchestrator | 2026-04-07 00:30:23.983378 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-07 00:30:23.983389 | orchestrator | Tuesday 07 April 2026 00:30:19 +0000 (0:00:00.282) 0:00:08.626 ********* 2026-04-07 00:30:23.983400 | orchestrator | changed: [testbed-manager] 2026-04-07 00:30:23.983411 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:30:23.983422 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:30:23.983434 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:30:23.983445 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:30:23.983455 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:30:23.983466 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:30:23.983477 | orchestrator | 2026-04-07 00:30:23.983488 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-07 00:30:23.983499 | orchestrator | Tuesday 07 April 2026 00:30:21 +0000 (0:00:01.597) 0:00:10.223 ********* 2026-04-07 00:30:23.983510 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:30:23.983523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:30:23.983536 | orchestrator | 2026-04-07 00:30:23.983547 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-07 00:30:23.983558 | orchestrator | Tuesday 07 April 2026 00:30:21 +0000 (0:00:00.278) 0:00:10.502 ********* 2026-04-07 00:30:23.983569 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:30:23.983580 | 
orchestrator | changed: [testbed-node-3] 2026-04-07 00:30:23.983591 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:30:23.983602 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:30:23.983627 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:30:23.983648 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:30:23.983659 | orchestrator | 2026-04-07 00:30:23.983671 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-07 00:30:23.983682 | orchestrator | Tuesday 07 April 2026 00:30:22 +0000 (0:00:01.105) 0:00:11.608 ********* 2026-04-07 00:30:23.983693 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:30:23.983705 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:30:23.983730 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:30:23.983741 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:30:23.983752 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:30:23.983763 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:30:23.983793 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:30:23.983805 | orchestrator | 2026-04-07 00:30:23.983816 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-07 00:30:23.983827 | orchestrator | Tuesday 07 April 2026 00:30:23 +0000 (0:00:00.629) 0:00:12.238 ********* 2026-04-07 00:30:23.983838 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:30:23.983850 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:30:23.983860 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:30:23.983871 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:30:23.983882 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:30:23.983893 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:30:23.983904 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:23.983915 | orchestrator | 2026-04-07 00:30:23.983926 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-04-07 00:30:23.983938 | orchestrator | Tuesday 07 April 2026 00:30:23 +0000 (0:00:00.406) 0:00:12.645 ********* 2026-04-07 00:30:23.983950 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:30:23.983960 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:30:23.984041 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:30:36.981095 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:30:36.981150 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:30:36.981156 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:30:36.981160 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:30:36.981165 | orchestrator | 2026-04-07 00:30:36.981170 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-07 00:30:36.981174 | orchestrator | Tuesday 07 April 2026 00:30:24 +0000 (0:00:00.199) 0:00:12.844 ********* 2026-04-07 00:30:36.981180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:30:36.981192 | orchestrator | 2026-04-07 00:30:36.981196 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-07 00:30:36.981201 | orchestrator | Tuesday 07 April 2026 00:30:24 +0000 (0:00:00.305) 0:00:13.150 ********* 2026-04-07 00:30:36.981205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:30:36.981208 | orchestrator | 2026-04-07 00:30:36.981212 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-07 
00:30:36.981216 | orchestrator | Tuesday 07 April 2026 00:30:24 +0000 (0:00:00.331) 0:00:13.482 ********* 2026-04-07 00:30:36.981220 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981224 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:36.981228 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:36.981232 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981235 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:30:36.981239 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:36.981243 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981247 | orchestrator | 2026-04-07 00:30:36.981250 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-07 00:30:36.981255 | orchestrator | Tuesday 07 April 2026 00:30:26 +0000 (0:00:01.513) 0:00:14.996 ********* 2026-04-07 00:30:36.981258 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:30:36.981262 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:30:36.981266 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:30:36.981270 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:30:36.981274 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:30:36.981277 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:30:36.981281 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:30:36.981285 | orchestrator | 2026-04-07 00:30:36.981289 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-07 00:30:36.981304 | orchestrator | Tuesday 07 April 2026 00:30:26 +0000 (0:00:00.208) 0:00:15.204 ********* 2026-04-07 00:30:36.981308 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981312 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:36.981316 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:30:36.981320 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981323 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:36.981327 | orchestrator 
| ok: [testbed-node-5] 2026-04-07 00:30:36.981331 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981334 | orchestrator | 2026-04-07 00:30:36.981338 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-07 00:30:36.981342 | orchestrator | Tuesday 07 April 2026 00:30:27 +0000 (0:00:00.713) 0:00:15.918 ********* 2026-04-07 00:30:36.981346 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:30:36.981349 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:30:36.981353 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:30:36.981357 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:30:36.981361 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:30:36.981364 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:30:36.981368 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:30:36.981372 | orchestrator | 2026-04-07 00:30:36.981376 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-07 00:30:36.981380 | orchestrator | Tuesday 07 April 2026 00:30:27 +0000 (0:00:00.242) 0:00:16.160 ********* 2026-04-07 00:30:36.981384 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981391 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:30:36.981396 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:30:36.981402 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:30:36.981412 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:30:36.981419 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:30:36.981425 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:30:36.981431 | orchestrator | 2026-04-07 00:30:36.981437 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-07 00:30:36.981443 | orchestrator | Tuesday 07 April 2026 00:30:28 +0000 (0:00:00.622) 0:00:16.782 ********* 2026-04-07 00:30:36.981450 | orchestrator | ok: 
[testbed-manager] 2026-04-07 00:30:36.981456 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:30:36.981462 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:30:36.981469 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:30:36.981475 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:30:36.981482 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:30:36.981489 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:30:36.981496 | orchestrator | 2026-04-07 00:30:36.981503 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-07 00:30:36.981509 | orchestrator | Tuesday 07 April 2026 00:30:29 +0000 (0:00:01.331) 0:00:18.114 ********* 2026-04-07 00:30:36.981512 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981516 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:36.981520 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:30:36.981524 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981527 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:36.981531 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:36.981535 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981539 | orchestrator | 2026-04-07 00:30:36.981542 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-07 00:30:36.981546 | orchestrator | Tuesday 07 April 2026 00:30:30 +0000 (0:00:01.194) 0:00:19.309 ********* 2026-04-07 00:30:36.981559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:30:36.981563 | orchestrator | 2026-04-07 00:30:36.981567 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-07 00:30:36.981575 | orchestrator | Tuesday 07 April 2026 
00:30:30 +0000 (0:00:00.314) 0:00:19.623 ********* 2026-04-07 00:30:36.981579 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:30:36.981582 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:30:36.981586 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:30:36.981590 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:30:36.981594 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:30:36.981597 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:30:36.981601 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:30:36.981605 | orchestrator | 2026-04-07 00:30:36.981608 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-07 00:30:36.981612 | orchestrator | Tuesday 07 April 2026 00:30:32 +0000 (0:00:01.508) 0:00:21.132 ********* 2026-04-07 00:30:36.981616 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981621 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:36.981625 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:30:36.981629 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:36.981634 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981638 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981642 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:36.981647 | orchestrator | 2026-04-07 00:30:36.981651 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-07 00:30:36.981655 | orchestrator | Tuesday 07 April 2026 00:30:32 +0000 (0:00:00.250) 0:00:21.382 ********* 2026-04-07 00:30:36.981660 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981664 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:36.981669 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:30:36.981673 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:36.981677 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981681 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981686 | 
orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:36.981690 | orchestrator | 2026-04-07 00:30:36.981694 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-07 00:30:36.981699 | orchestrator | Tuesday 07 April 2026 00:30:32 +0000 (0:00:00.206) 0:00:21.589 ********* 2026-04-07 00:30:36.981703 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981707 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:36.981712 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:30:36.981716 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:36.981721 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981725 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981729 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:36.981734 | orchestrator | 2026-04-07 00:30:36.981738 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-07 00:30:36.981743 | orchestrator | Tuesday 07 April 2026 00:30:33 +0000 (0:00:00.201) 0:00:21.790 ********* 2026-04-07 00:30:36.981748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:30:36.981753 | orchestrator | 2026-04-07 00:30:36.981758 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-07 00:30:36.981762 | orchestrator | Tuesday 07 April 2026 00:30:33 +0000 (0:00:00.311) 0:00:22.101 ********* 2026-04-07 00:30:36.981767 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:36.981771 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981776 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:36.981780 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981784 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:36.981788 | orchestrator | ok: 
[testbed-node-1] 2026-04-07 00:30:36.981793 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981797 | orchestrator | 2026-04-07 00:30:36.981801 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-07 00:30:36.981805 | orchestrator | Tuesday 07 April 2026 00:30:33 +0000 (0:00:00.592) 0:00:22.694 ********* 2026-04-07 00:30:36.981810 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:30:36.981817 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:30:36.981821 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:30:36.981826 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:30:36.981831 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:30:36.981835 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:30:36.981840 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:30:36.981844 | orchestrator | 2026-04-07 00:30:36.981848 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-07 00:30:36.981853 | orchestrator | Tuesday 07 April 2026 00:30:34 +0000 (0:00:00.208) 0:00:22.903 ********* 2026-04-07 00:30:36.981857 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981862 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:30:36.981866 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:30:36.981870 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981875 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981879 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:30:36.981884 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:36.981888 | orchestrator | 2026-04-07 00:30:36.981892 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-07 00:30:36.981897 | orchestrator | Tuesday 07 April 2026 00:30:35 +0000 (0:00:01.182) 0:00:24.085 ********* 2026-04-07 00:30:36.981901 | orchestrator | ok: [testbed-manager] 2026-04-07 
00:30:36.981906 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:30:36.981910 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981915 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981919 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:30:36.981923 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:30:36.981928 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:30:36.981932 | orchestrator | 2026-04-07 00:30:36.981936 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-07 00:30:36.981941 | orchestrator | Tuesday 07 April 2026 00:30:35 +0000 (0:00:00.626) 0:00:24.711 ********* 2026-04-07 00:30:36.981945 | orchestrator | ok: [testbed-manager] 2026-04-07 00:30:36.981949 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:30:36.981954 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:30:36.981958 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:30:36.981965 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:31:20.482421 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:31:20.483281 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:31:20.483312 | orchestrator | 2026-04-07 00:31:20.483321 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-07 00:31:20.483331 | orchestrator | Tuesday 07 April 2026 00:30:37 +0000 (0:00:01.105) 0:00:25.817 ********* 2026-04-07 00:31:20.483338 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:31:20.483346 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:31:20.483353 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:31:20.483360 | orchestrator | changed: [testbed-manager] 2026-04-07 00:31:20.483367 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:31:20.483374 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:31:20.483380 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:31:20.483385 | orchestrator | 2026-04-07 00:31:20.483392 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-07 00:31:20.483399 | orchestrator | Tuesday 07 April 2026 00:30:55 +0000 (0:00:18.477) 0:00:44.294 ********* 2026-04-07 00:31:20.483406 | orchestrator | ok: [testbed-manager] 2026-04-07 00:31:20.483413 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:31:20.483420 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:31:20.483427 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:31:20.483434 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:31:20.483441 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:31:20.483447 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:31:20.483454 | orchestrator | 2026-04-07 00:31:20.483460 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-04-07 00:31:20.483467 | orchestrator | Tuesday 07 April 2026 00:30:55 +0000 (0:00:00.234) 0:00:44.529 ********* 2026-04-07 00:31:20.483498 | orchestrator | ok: [testbed-manager] 2026-04-07 00:31:20.483505 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:31:20.483512 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:31:20.483518 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:31:20.483524 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:31:20.483530 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:31:20.483536 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:31:20.483542 | orchestrator | 2026-04-07 00:31:20.483548 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-04-07 00:31:20.483554 | orchestrator | Tuesday 07 April 2026 00:30:55 +0000 (0:00:00.222) 0:00:44.752 ********* 2026-04-07 00:31:20.483560 | orchestrator | ok: [testbed-manager] 2026-04-07 00:31:20.483565 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:31:20.483571 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:31:20.483577 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:31:20.483583 | orchestrator | ok: 
[testbed-node-3] 2026-04-07 00:31:20.483589 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:31:20.483595 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:31:20.483601 | orchestrator | 2026-04-07 00:31:20.483607 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-04-07 00:31:20.483614 | orchestrator | Tuesday 07 April 2026 00:30:56 +0000 (0:00:00.215) 0:00:44.967 ********* 2026-04-07 00:31:20.483622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:31:20.483631 | orchestrator | 2026-04-07 00:31:20.483637 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-04-07 00:31:20.483643 | orchestrator | Tuesday 07 April 2026 00:30:56 +0000 (0:00:00.276) 0:00:45.244 ********* 2026-04-07 00:31:20.483648 | orchestrator | ok: [testbed-manager] 2026-04-07 00:31:20.483654 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:31:20.483660 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:31:20.483666 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:31:20.483688 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:31:20.483694 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:31:20.483700 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:31:20.483706 | orchestrator | 2026-04-07 00:31:20.483712 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-04-07 00:31:20.483718 | orchestrator | Tuesday 07 April 2026 00:30:58 +0000 (0:00:02.176) 0:00:47.420 ********* 2026-04-07 00:31:20.483725 | orchestrator | changed: [testbed-manager] 2026-04-07 00:31:20.483731 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:31:20.483737 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:31:20.483744 | orchestrator | 
changed: [testbed-node-4]
2026-04-07 00:31:20.483750 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:31:20.483756 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:31:20.483766 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:31:20.483773 | orchestrator |
2026-04-07 00:31:20.483778 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-07 00:31:20.483784 | orchestrator | Tuesday 07 April 2026 00:30:59 +0000 (0:00:01.249) 0:00:48.670 *********
2026-04-07 00:31:20.483790 | orchestrator | ok: [testbed-manager]
2026-04-07 00:31:20.483796 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:31:20.483803 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:31:20.483808 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:31:20.483814 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:31:20.483820 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:31:20.483826 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:31:20.483832 | orchestrator |
2026-04-07 00:31:20.483838 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-07 00:31:20.483844 | orchestrator | Tuesday 07 April 2026 00:31:00 +0000 (0:00:00.927) 0:00:49.598 *********
2026-04-07 00:31:20.483850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:31:20.483865 | orchestrator |
2026-04-07 00:31:20.483871 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-07 00:31:20.483878 | orchestrator | Tuesday 07 April 2026 00:31:01 +0000 (0:00:00.267) 0:00:49.865 *********
2026-04-07 00:31:20.483884 | orchestrator | changed: [testbed-manager]
2026-04-07 00:31:20.483890 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:31:20.483898 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:31:20.483904 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:31:20.483910 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:31:20.483917 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:31:20.483923 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:31:20.483929 | orchestrator |
2026-04-07 00:31:20.483957 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-07 00:31:20.483964 | orchestrator | Tuesday 07 April 2026 00:31:02 +0000 (0:00:01.131) 0:00:50.997 *********
2026-04-07 00:31:20.483971 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:31:20.483977 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:31:20.484054 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:31:20.484062 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:31:20.484069 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:31:20.484075 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:31:20.484081 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:31:20.484088 | orchestrator |
2026-04-07 00:31:20.484094 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-07 00:31:20.484100 | orchestrator | Tuesday 07 April 2026 00:31:02 +0000 (0:00:00.216) 0:00:51.213 *********
2026-04-07 00:31:20.484107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:31:20.484113 | orchestrator |
2026-04-07 00:31:20.484119 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-07 00:31:20.484125 | orchestrator | Tuesday 07 April 2026 00:31:02 +0000 (0:00:00.293) 0:00:51.507 *********
2026-04-07 00:31:20.484131 | orchestrator | ok: [testbed-manager]
2026-04-07 00:31:20.484137 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:31:20.484144 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:31:20.484150 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:31:20.484156 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:31:20.484162 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:31:20.484168 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:31:20.484174 | orchestrator |
2026-04-07 00:31:20.484180 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-07 00:31:20.484186 | orchestrator | Tuesday 07 April 2026 00:31:04 +0000 (0:00:02.257) 0:00:53.765 *********
2026-04-07 00:31:20.484191 | orchestrator | changed: [testbed-manager]
2026-04-07 00:31:20.484197 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:31:20.484203 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:31:20.484208 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:31:20.484214 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:31:20.484220 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:31:20.484226 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:31:20.484231 | orchestrator |
2026-04-07 00:31:20.484237 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-07 00:31:20.484243 | orchestrator | Tuesday 07 April 2026 00:31:06 +0000 (0:00:01.174) 0:00:54.939 *********
2026-04-07 00:31:20.484249 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:31:20.484255 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:31:20.484260 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:31:20.484266 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:31:20.484272 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:31:20.484279 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:31:20.484293 | orchestrator | changed: [testbed-manager]
2026-04-07 00:31:20.484299 | orchestrator |
2026-04-07 00:31:20.484305 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-07 00:31:20.484312 | orchestrator | Tuesday 07 April 2026 00:31:17 +0000 (0:00:10.893) 0:01:05.832 *********
2026-04-07 00:31:20.484318 | orchestrator | ok: [testbed-manager]
2026-04-07 00:31:20.484324 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:31:20.484331 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:31:20.484337 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:31:20.484343 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:31:20.484350 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:31:20.484356 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:31:20.484362 | orchestrator |
2026-04-07 00:31:20.484369 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-07 00:31:20.484375 | orchestrator | Tuesday 07 April 2026 00:31:18 +0000 (0:00:01.644) 0:01:07.476 *********
2026-04-07 00:31:20.484381 | orchestrator | ok: [testbed-manager]
2026-04-07 00:31:20.484387 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:31:20.484394 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:31:20.484400 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:31:20.484406 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:31:20.484413 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:31:20.484419 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:31:20.484425 | orchestrator |
2026-04-07 00:31:20.484437 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-07 00:31:20.484443 | orchestrator | Tuesday 07 April 2026 00:31:19 +0000 (0:00:01.025) 0:01:08.502 *********
2026-04-07 00:31:20.484449 | orchestrator | ok: [testbed-manager]
2026-04-07 00:31:20.484456 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:31:20.484462 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:31:20.484468 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:31:20.484473 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:31:20.484479 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:31:20.484484 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:31:20.484490 | orchestrator |
2026-04-07 00:31:20.484496 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-07 00:31:20.484502 | orchestrator | Tuesday 07 April 2026 00:31:19 +0000 (0:00:00.213) 0:01:08.716 *********
2026-04-07 00:31:20.484508 | orchestrator | ok: [testbed-manager]
2026-04-07 00:31:20.484514 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:31:20.484519 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:31:20.484525 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:31:20.484531 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:31:20.484536 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:31:20.484542 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:31:20.484548 | orchestrator |
2026-04-07 00:31:20.484553 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-07 00:31:20.484559 | orchestrator | Tuesday 07 April 2026 00:31:20 +0000 (0:00:00.237) 0:01:08.954 *********
2026-04-07 00:31:20.484566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:31:20.484572 | orchestrator |
2026-04-07 00:31:20.484587 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-07 00:33:34.543669 | orchestrator | Tuesday 07 April 2026 00:31:20 +0000 (0:00:00.297) 0:01:09.251 *********
2026-04-07 00:33:34.543754 | orchestrator | ok: [testbed-manager]
2026-04-07 00:33:34.543765 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:33:34.543772 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:33:34.543778 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:33:34.543783 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:33:34.543790 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:33:34.543795 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:33:34.543801 | orchestrator |
2026-04-07 00:33:34.543808 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-04-07 00:33:34.543832 | orchestrator | Tuesday 07 April 2026 00:31:22 +0000 (0:00:02.275) 0:01:11.527 *********
2026-04-07 00:33:34.543838 | orchestrator | changed: [testbed-manager]
2026-04-07 00:33:34.543845 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:33:34.543851 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:33:34.543857 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:33:34.543863 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:33:34.543868 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:33:34.543874 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:33:34.543880 | orchestrator |
2026-04-07 00:33:34.543886 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-04-07 00:33:34.543892 | orchestrator | Tuesday 07 April 2026 00:31:23 +0000 (0:00:00.540) 0:01:12.067 *********
2026-04-07 00:33:34.543898 | orchestrator | ok: [testbed-manager]
2026-04-07 00:33:34.543966 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:33:34.543973 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:33:34.543978 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:33:34.543984 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:33:34.543990 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:33:34.543995 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:33:34.544001 | orchestrator |
2026-04-07 00:33:34.544007 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-04-07 00:33:34.544013 | orchestrator | Tuesday 07 April 2026 00:31:23 +0000 (0:00:00.203) 0:01:12.271 *********
2026-04-07 00:33:34.544019 | orchestrator | ok: [testbed-manager]
2026-04-07 00:33:34.544025 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:33:34.544031 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:33:34.544036 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:33:34.544042 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:33:34.544048 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:33:34.544053 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:33:34.544059 | orchestrator |
2026-04-07 00:33:34.544065 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-07 00:33:34.544070 | orchestrator | Tuesday 07 April 2026 00:31:24 +0000 (0:00:01.490) 0:01:13.761 *********
2026-04-07 00:33:34.544076 | orchestrator | changed: [testbed-manager]
2026-04-07 00:33:34.544082 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:33:34.544088 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:33:34.544093 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:33:34.544099 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:33:34.544105 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:33:34.544110 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:33:34.544116 | orchestrator |
2026-04-07 00:33:34.544122 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-07 00:33:34.544127 | orchestrator | Tuesday 07 April 2026 00:31:27 +0000 (0:00:02.129) 0:01:15.891 *********
2026-04-07 00:33:34.544133 | orchestrator | ok: [testbed-manager]
2026-04-07 00:33:34.544139 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:33:34.544145 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:33:34.544151 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:33:34.544156 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:33:34.544162 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:33:34.544168 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:33:34.544174 | orchestrator |
2026-04-07 00:33:34.544179 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-07 00:33:34.544185 | orchestrator | Tuesday 07 April 2026 00:31:30 +0000 (0:00:02.945) 0:01:18.836 *********
2026-04-07 00:33:34.544191 | orchestrator | ok: [testbed-manager]
2026-04-07 00:33:34.544196 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:33:34.544202 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:33:34.544208 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:33:34.544213 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:33:34.544219 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:33:34.544225 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:33:34.544237 | orchestrator |
2026-04-07 00:33:34.544243 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-07 00:33:34.544260 | orchestrator | Tuesday 07 April 2026 00:32:03 +0000 (0:00:33.423) 0:01:52.260 *********
2026-04-07 00:33:34.544268 | orchestrator | changed: [testbed-manager]
2026-04-07 00:33:34.544274 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:33:34.544281 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:33:34.544287 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:33:34.544294 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:33:34.544300 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:33:34.544307 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:33:34.544314 | orchestrator |
2026-04-07 00:33:34.544320 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-07 00:33:34.544327 | orchestrator | Tuesday 07 April 2026 00:33:20 +0000 (0:01:16.805) 0:03:09.065 *********
2026-04-07 00:33:34.544333 | orchestrator | ok: [testbed-manager]
2026-04-07 00:33:34.544340 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:33:34.544347 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:33:34.544354 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:33:34.544361 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:33:34.544368 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:33:34.544375 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:33:34.544381 | orchestrator |
2026-04-07 00:33:34.544387 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-07 00:33:34.544393 | orchestrator | Tuesday 07 April 2026 00:33:22 +0000 (0:00:02.168) 0:03:11.234 *********
2026-04-07 00:33:34.544399 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:33:34.544405 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:33:34.544410 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:33:34.544416 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:33:34.544421 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:33:34.544427 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:33:34.544433 | orchestrator | changed: [testbed-manager]
2026-04-07 00:33:34.544438 | orchestrator |
2026-04-07 00:33:34.544444 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-07 00:33:34.544450 | orchestrator | Tuesday 07 April 2026 00:33:33 +0000 (0:00:11.028) 0:03:22.263 *********
2026-04-07 00:33:34.544477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-07 00:33:34.544491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-07 00:33:34.544500 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-07 00:33:34.544507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-07 00:33:34.544517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-07 00:33:34.544526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-07 00:33:34.544532 | orchestrator |
2026-04-07 00:33:34.544538 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-07 00:33:34.544544 | orchestrator | Tuesday 07 April 2026 00:33:33 +0000 (0:00:00.365) 0:03:22.628 *********
2026-04-07 00:33:34.544550 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 00:33:34.544556 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:33:34.544562 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 00:33:34.544568 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:33:34.544574 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 00:33:34.544579 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:33:34.544585 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 00:33:34.544591 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:33:34.544597 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 00:33:34.544607 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 00:33:34.544613 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 00:33:34.544619 | orchestrator |
2026-04-07 00:33:34.544624 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-07 00:33:34.544630 | orchestrator | Tuesday 07 April 2026 00:33:34 +0000 (0:00:00.627) 0:03:23.255 *********
2026-04-07 00:33:34.544636 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 00:33:34.544643 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 00:33:34.544648 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 00:33:34.544654 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 00:33:34.544660 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 00:33:34.544669 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 00:33:43.929959 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 00:33:43.930138 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 00:33:43.930159 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 00:33:43.930172 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 00:33:43.930185 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:33:43.930197 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 00:33:43.930208 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 00:33:43.930220 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 00:33:43.930261 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 00:33:43.930273 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 00:33:43.930281 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 00:33:43.930288 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 00:33:43.930297 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 00:33:43.930309 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 00:33:43.930322 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 00:33:43.930334 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 00:33:43.930342 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 00:33:43.930353 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 00:33:43.930366 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 00:33:43.930375 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 00:33:43.930384 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:33:43.930395 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 00:33:43.930406 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 00:33:43.930418 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 00:33:43.930426 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 00:33:43.930437 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 00:33:43.930450 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 00:33:43.930460 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:33:43.930469 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 00:33:43.930497 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 00:33:43.930507 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 00:33:43.930518 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 00:33:43.930528 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 00:33:43.930538 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 00:33:43.930547 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 00:33:43.930558 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 00:33:43.930569 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 00:33:43.930578 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:33:43.930586 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 00:33:43.930597 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 00:33:43.930606 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 00:33:43.930637 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 00:33:43.930645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 00:33:43.930674 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 00:33:43.930683 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 00:33:43.930690 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 00:33:43.930697 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 00:33:43.930705 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 00:33:43.930713 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 00:33:43.930721 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 00:33:43.930729 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 00:33:43.930737 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 00:33:43.930743 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 00:33:43.930748 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 00:33:43.930753 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 00:33:43.930758 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 00:33:43.930763 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 00:33:43.930768 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 00:33:43.930772 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 00:33:43.930777 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 00:33:43.930782 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 00:33:43.930786 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 00:33:43.930791 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 00:33:43.930796 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 00:33:43.930801 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 00:33:43.930805 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 00:33:43.930810 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 00:33:43.930815 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 00:33:43.930820 | orchestrator |
2026-04-07 00:33:43.930825 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-07 00:33:43.930830 | orchestrator | Tuesday 07 April 2026 00:33:42 +0000 (0:00:08.211) 0:03:31.467 *********
2026-04-07 00:33:43.930835 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 00:33:43.930840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 00:33:43.930849 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 00:33:43.930854 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 00:33:43.930864 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 00:33:43.930869 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 00:33:43.930874 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 00:33:43.930879 | orchestrator |
2026-04-07 00:33:43.930884 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-07 00:33:43.930889 | orchestrator | Tuesday 07 April 2026 00:33:43 +0000 (0:00:00.685) 0:03:32.153 *********
2026-04-07 00:33:43.930937 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:43.930943 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:33:43.930948 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:43.930952 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:43.930957 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:33:43.930962 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:43.930967 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:33:43.930972 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:33:43.930976 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:43.930981 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:43.930993 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:58.488736 | orchestrator |
2026-04-07 00:33:58.488828 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-07 00:33:58.488837 | orchestrator | Tuesday 07 April 2026 00:33:43 +0000 (0:00:00.583) 0:03:32.736 *********
2026-04-07 00:33:58.488844 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:58.488852 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:33:58.488859 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:58.488866 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:33:58.488912 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:58.488921 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:33:58.488928 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:58.488935 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:33:58.488942 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:58.488949 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:58.488956 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 00:33:58.488962 | orchestrator |
2026-04-07 00:33:58.488968 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-07 00:33:58.488975 | orchestrator | Tuesday 07 April 2026 00:33:45 +0000 (0:00:01.521) 0:03:34.257 *********
2026-04-07 00:33:58.488982 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 00:33:58.488988 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:33:58.489003 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 00:33:58.489010 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:33:58.489016 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 00:33:58.489050 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 00:33:58.489056 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:33:58.489063 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:33:58.489070 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 00:33:58.489076 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 00:33:58.489082 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 00:33:58.489088 | orchestrator |
2026-04-07 00:33:58.489095 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-07 00:33:58.489101 | orchestrator | Tuesday 07 April 2026 00:33:47 +0000 (0:00:01.629) 0:03:35.886 *********
2026-04-07 00:33:58.489107 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:33:58.489114 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:33:58.489120 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:33:58.489126 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:33:58.489131 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:33:58.489137 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:33:58.489143 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:33:58.489149 | orchestrator |
2026-04-07 00:33:58.489154 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-07 00:33:58.489160 | orchestrator | Tuesday 07 April 2026 00:33:47 +0000 (0:00:00.266) 0:03:36.153 *********
2026-04-07 00:33:58.489166 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:33:58.489173 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:33:58.489179 | orchestrator | ok: [testbed-manager]
2026-04-07 00:33:58.489185 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:33:58.489191 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:33:58.489196 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:33:58.489202 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:33:58.489208 | orchestrator |
2026-04-07 00:33:58.489214 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-07 00:33:58.489220 | orchestrator | Tuesday 07 April 2026 00:33:53 +0000 (0:00:05.645) 0:03:41.799 *********
2026-04-07 00:33:58.489226 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-07 00:33:58.489232 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-07 00:33:58.489239 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:33:58.489246 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-07 00:33:58.489253 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:33:58.489259 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:33:58.489265 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-07 00:33:58.489272 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:33:58.489278 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-07 00:33:58.489285 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-07 00:33:58.489292 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:33:58.489299 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:33:58.489306 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-07 00:33:58.489313
| orchestrator | skipping: [testbed-node-5] 2026-04-07 00:33:58.489319 | orchestrator | 2026-04-07 00:33:58.489326 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-07 00:33:58.489334 | orchestrator | Tuesday 07 April 2026 00:33:53 +0000 (0:00:00.264) 0:03:42.063 ********* 2026-04-07 00:33:58.489341 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-07 00:33:58.489348 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-07 00:33:58.489355 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-07 00:33:58.489378 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-04-07 00:33:58.489385 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-04-07 00:33:58.489392 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-07 00:33:58.489404 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-04-07 00:33:58.489410 | orchestrator | 2026-04-07 00:33:58.489417 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-07 00:33:58.489423 | orchestrator | Tuesday 07 April 2026 00:33:54 +0000 (0:00:01.025) 0:03:43.089 ********* 2026-04-07 00:33:58.489432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:33:58.489440 | orchestrator | 2026-04-07 00:33:58.489446 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-07 00:33:58.489452 | orchestrator | Tuesday 07 April 2026 00:33:54 +0000 (0:00:00.351) 0:03:43.441 ********* 2026-04-07 00:33:58.489459 | orchestrator | ok: [testbed-manager] 2026-04-07 00:33:58.489465 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:33:58.489472 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:33:58.489478 | orchestrator | ok: 
[testbed-node-3] 2026-04-07 00:33:58.489484 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:33:58.489491 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:33:58.489497 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:33:58.489503 | orchestrator | 2026-04-07 00:33:58.489509 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-07 00:33:58.489516 | orchestrator | Tuesday 07 April 2026 00:33:56 +0000 (0:00:01.399) 0:03:44.840 ********* 2026-04-07 00:33:58.489522 | orchestrator | ok: [testbed-manager] 2026-04-07 00:33:58.489529 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:33:58.489535 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:33:58.489543 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:33:58.489550 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:33:58.489557 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:33:58.489564 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:33:58.489571 | orchestrator | 2026-04-07 00:33:58.489577 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-07 00:33:58.489583 | orchestrator | Tuesday 07 April 2026 00:33:56 +0000 (0:00:00.634) 0:03:45.475 ********* 2026-04-07 00:33:58.489588 | orchestrator | changed: [testbed-manager] 2026-04-07 00:33:58.489610 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:33:58.489618 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:33:58.489625 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:33:58.489632 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:33:58.489640 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:33:58.489646 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:33:58.489653 | orchestrator | 2026-04-07 00:33:58.489659 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-07 00:33:58.489665 | orchestrator | Tuesday 07 April 2026 00:33:57 +0000 (0:00:00.668) 
0:03:46.143 *********
2026-04-07 00:33:58.489672 | orchestrator | ok: [testbed-manager]
2026-04-07 00:33:58.489678 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:33:58.489684 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:33:58.489691 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:33:58.489697 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:33:58.489703 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:33:58.489709 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:33:58.489714 | orchestrator |
2026-04-07 00:33:58.489720 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-07 00:33:58.489726 | orchestrator | Tuesday 07 April 2026 00:33:57 +0000 (0:00:00.588) 0:03:46.732 *********
2026-04-07 00:33:58.489737 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775520349.9151702, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:33:58.489749 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775520309.583479, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:33:58.489755 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775520353.8745494, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:33:58.489775 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775520374.0729034, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.799773 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775520337.537998, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800010 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775520365.499298, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800042 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775520364.4820611, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800084 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800139 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800161 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800182 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800239 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800261 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800283 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 00:34:03.800305 | orchestrator |
2026-04-07 00:34:03.800328 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-07 00:34:03.800352 | orchestrator | Tuesday 07 April 2026 00:33:58 +0000 (0:00:01.007) 0:03:47.739 *********
2026-04-07 00:34:03.800373 | orchestrator | changed: [testbed-manager]
2026-04-07 00:34:03.800396 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:34:03.800416 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:34:03.800444 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:34:03.800467 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:34:03.800488 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:34:03.800506 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:34:03.800523 | orchestrator |
2026-04-07 00:34:03.800541 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-07 00:34:03.800559 | orchestrator | Tuesday 07 April 2026 00:33:59 +0000 (0:00:01.036) 0:03:48.776 *********
2026-04-07 00:34:03.800577 | orchestrator | changed: [testbed-manager]
2026-04-07 00:34:03.800594 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:34:03.800612 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:34:03.800648 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:34:03.800667 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:34:03.800685 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:34:03.800703 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:34:03.800722 | orchestrator |
2026-04-07 00:34:03.800741 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-07 00:34:03.800758 | orchestrator | Tuesday 07 April 2026 00:34:01 +0000 (0:00:01.137) 0:03:49.914 *********
2026-04-07 00:34:03.800776 | orchestrator | changed: [testbed-manager]
2026-04-07 00:34:03.800795 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:34:03.800812 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:34:03.800829 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:34:03.800846 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:34:03.800897 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:34:03.800917 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:34:03.800935 | orchestrator |
2026-04-07 00:34:03.800953 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-07 00:34:03.800971 | orchestrator | Tuesday 07 April 2026 00:34:02 +0000 (0:00:01.260) 0:03:51.175 *********
2026-04-07 00:34:03.800989 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:34:03.801006 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:34:03.801024 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:34:03.801043 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:34:03.801062 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:34:03.801080 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:34:03.801098 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:34:03.801111 | orchestrator |
2026-04-07 00:34:03.801121 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-07 00:34:03.801132 | orchestrator | Tuesday 07 April 2026 00:34:02 +0000 (0:00:00.233) 0:03:51.409 *********
2026-04-07 00:34:03.801143 | orchestrator | ok: [testbed-manager]
2026-04-07 00:34:03.801155 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:34:03.801166 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:34:03.801176 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:34:03.801187 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:34:03.801198 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:34:03.801208 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:34:03.801219 | orchestrator |
2026-04-07 00:34:03.801230 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-07 00:34:03.801241 | orchestrator | Tuesday 07 April 2026 00:34:03 +0000 (0:00:00.760) 0:03:52.169 *********
2026-04-07 00:34:03.801254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:34:03.801267 | orchestrator |
2026-04-07 00:34:03.801278 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-07 00:34:03.801303 | orchestrator | Tuesday 07 April 2026 00:34:03 +0000 (0:00:00.400) 0:03:52.570 *********
2026-04-07 00:35:26.997180 | orchestrator | ok: [testbed-manager]
2026-04-07 00:35:26.997310 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:35:26.997328 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:35:26.997414 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:35:26.997429 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:35:26.997440 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:35:26.997451 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:35:26.997464 | orchestrator |
2026-04-07 00:35:26.997476 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-07 00:35:26.997489 | orchestrator | Tuesday 07 April 2026 00:34:13 +0000 (0:00:09.997) 0:04:02.567 *********
2026-04-07 00:35:26.997499 | orchestrator | ok: [testbed-manager]
2026-04-07 00:35:26.997510 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:35:26.997521 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:35:26.997532 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:35:26.997543 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:35:26.997554 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:35:26.997564 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:35:26.997575 | orchestrator |
2026-04-07 00:35:26.997586 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-07 00:35:26.997597 | orchestrator | Tuesday 07 April 2026 00:34:15 +0000 (0:00:01.391) 0:04:03.959 *********
2026-04-07 00:35:26.997608 | orchestrator | ok: [testbed-manager]
2026-04-07 00:35:26.997618 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:35:26.997629 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:35:26.997640 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:35:26.997650 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:35:26.997661 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:35:26.997672 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:35:26.997682 | orchestrator |
2026-04-07 00:35:26.997693 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-07 00:35:26.997704 | orchestrator | Tuesday 07 April 2026 00:34:16 +0000 (0:00:00.950) 0:04:04.909 *********
2026-04-07 00:35:26.997716 | orchestrator | ok: [testbed-manager]
2026-04-07 00:35:26.997729 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:35:26.997741 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:35:26.997753 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:35:26.997765 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:35:26.997803 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:35:26.997816 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:35:26.997829 | orchestrator |
2026-04-07 00:35:26.997841 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-07 00:35:26.997855 | orchestrator | Tuesday 07 April 2026 00:34:16 +0000 (0:00:00.258) 0:04:05.168 *********
2026-04-07 00:35:26.997868 | orchestrator | ok: [testbed-manager]
2026-04-07 00:35:26.997880 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:35:26.997892 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:35:26.997903 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:35:26.997916 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:35:26.997927 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:35:26.997940 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:35:26.997952 | orchestrator |
2026-04-07 00:35:26.997965 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-07 00:35:26.997978 | orchestrator | Tuesday 07 April 2026 00:34:16 +0000 (0:00:00.237) 0:04:05.406 *********
2026-04-07 00:35:26.997990 | orchestrator | ok: [testbed-manager]
2026-04-07 00:35:26.998003 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:35:26.998059 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:35:26.998076 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:35:26.998089 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:35:26.998100 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:35:26.998111 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:35:26.998122 | orchestrator |
2026-04-07 00:35:26.998133 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-07 00:35:26.998144 | orchestrator | Tuesday 07 April 2026 00:34:16 +0000 (0:00:00.223) 0:04:05.630 *********
2026-04-07 00:35:26.998154 | orchestrator | ok: [testbed-manager]
2026-04-07 00:35:26.998165 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:35:26.998176 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:35:26.998197 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:35:26.998208 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:35:26.998219 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:35:26.998230 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:35:26.998241 | orchestrator |
2026-04-07 00:35:26.998251 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-07 00:35:26.998262 | orchestrator | Tuesday 07 April 2026 00:34:22 +0000 (0:00:05.684) 0:04:11.314 *********
2026-04-07 00:35:26.998275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:35:26.998290 | orchestrator |
2026-04-07 00:35:26.998301 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-07 00:35:26.998312 | orchestrator | Tuesday 07 April 2026 00:34:22 +0000 (0:00:00.362) 0:04:11.677 *********
2026-04-07 00:35:26.998323 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-07 00:35:26.998333 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-07 00:35:26.998344 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:35:26.998355 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-07 00:35:26.998366 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-07 00:35:26.998376 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-07 00:35:26.998387 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-07 00:35:26.998398 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:35:26.998409 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:35:26.998419 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-07 00:35:26.998430 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-07 00:35:26.998441 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-07 00:35:26.998451 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-07 00:35:26.998462 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:35:26.998473 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-07 00:35:26.998484 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-07 00:35:26.998513 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:35:26.998525 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:35:26.998536 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-07 00:35:26.998546 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-07 00:35:26.998557 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:35:26.998568 | orchestrator |
2026-04-07 00:35:26.998579 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-07 00:35:26.998589 | orchestrator | Tuesday 07 April 2026 00:34:23 +0000 (0:00:00.317) 0:04:11.995 *********
2026-04-07 00:35:26.998601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:35:26.998612 | orchestrator |
2026-04-07 00:35:26.998622 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-07 00:35:26.998633 | orchestrator | Tuesday 07 April 2026 00:34:23 +0000 (0:00:00.487) 0:04:12.482 *********
2026-04-07 00:35:26.998644 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-07 00:35:26.998654 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-07 00:35:26.998665 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:35:26.998676 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-07 00:35:26.998686 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:35:26.998697 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-07 00:35:26.998715 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:35:26.998725 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-07 00:35:26.998736 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:35:26.998763 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-07 00:35:26.998796 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:35:26.998808 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:35:26.998818 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-07 00:35:26.998829 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:35:26.998840 | orchestrator |
2026-04-07 00:35:26.998851 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-07 00:35:26.998861 | orchestrator | Tuesday 07 April 2026 00:34:24 +0000 (0:00:00.325) 0:04:12.808 *********
2026-04-07 00:35:26.998872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:35:26.998884 | orchestrator |
2026-04-07 00:35:26.998894 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-07 00:35:26.998905 | orchestrator | Tuesday 07 April 2026 00:34:24 +0000 (0:00:00.370) 0:04:13.178 *********
2026-04-07 00:35:26.998921 | orchestrator | changed: [testbed-manager]
2026-04-07 00:35:26.998932 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:35:26.998943 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:35:26.998954 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:35:26.998964 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:35:26.998975 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:35:26.998986 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:35:26.998997 | orchestrator |
2026-04-07 00:35:26.999008 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-07 00:35:26.999018 | orchestrator | Tuesday 07 April 2026 00:35:00 +0000 (0:00:36.243) 0:04:49.422 *********
2026-04-07 00:35:26.999029 | orchestrator | changed: [testbed-manager]
2026-04-07 00:35:26.999040 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:35:26.999051 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:35:26.999061 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:35:26.999072 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:35:26.999082 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:35:26.999093 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:35:26.999104 | orchestrator |
2026-04-07 00:35:26.999115 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-07 00:35:26.999125 | orchestrator |
Tuesday 07 April 2026 00:35:09 +0000 (0:00:08.962) 0:04:58.385 *********
2026-04-07 00:35:26.999136 | orchestrator | changed: [testbed-manager]
2026-04-07 00:35:26.999147 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:35:26.999157 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:35:26.999168 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:35:26.999179 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:35:26.999189 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:35:26.999200 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:35:26.999210 | orchestrator |
2026-04-07 00:35:26.999221 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-07 00:35:26.999232 | orchestrator | Tuesday 07 April 2026 00:35:18 +0000 (0:00:08.731) 0:05:07.116 *********
2026-04-07 00:35:26.999243 | orchestrator | ok: [testbed-manager]
2026-04-07 00:35:26.999253 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:35:26.999264 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:35:26.999275 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:35:26.999285 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:35:26.999310 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:35:26.999322 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:35:26.999342 | orchestrator |
2026-04-07 00:35:26.999353 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-07 00:35:26.999372 | orchestrator | Tuesday 07 April 2026 00:35:20 +0000 (0:00:01.959) 0:05:09.076 *********
2026-04-07 00:35:26.999382 | orchestrator | changed: [testbed-manager]
2026-04-07 00:35:26.999393 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:35:26.999404 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:35:26.999414 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:35:26.999425 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:35:26.999436 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:35:26.999446 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:35:26.999457 | orchestrator |
2026-04-07 00:35:26.999475 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-07 00:35:37.661758 | orchestrator | Tuesday 07 April 2026 00:35:26 +0000 (0:00:06.688) 0:05:15.765 *********
2026-04-07 00:35:37.661959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:35:37.661988 | orchestrator |
2026-04-07 00:35:37.662008 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-07 00:35:37.662093 | orchestrator | Tuesday 07 April 2026 00:35:27 +0000 (0:00:00.402) 0:05:16.167 *********
2026-04-07 00:35:37.662113 | orchestrator | changed: [testbed-manager]
2026-04-07 00:35:37.662134 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:35:37.662153 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:35:37.662171 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:35:37.662190 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:35:37.662210 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:35:37.662229 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:35:37.662241 | orchestrator |
2026-04-07 00:35:37.662252 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-07 00:35:37.662264 | orchestrator | Tuesday 07 April 2026 00:35:28 +0000 (0:00:00.830) 0:05:16.998 *********
2026-04-07 00:35:37.662274 | orchestrator | ok: [testbed-manager]
2026-04-07 00:35:37.662286 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:35:37.662299 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:35:37.662313 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:35:37.662326 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:35:37.662338 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:35:37.662351 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:35:37.662364 | orchestrator |
2026-04-07 00:35:37.662376 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-07 00:35:37.662389 | orchestrator | Tuesday 07 April 2026 00:35:30 +0000 (0:00:01.828) 0:05:18.827 *********
2026-04-07 00:35:37.662401 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:35:37.662414 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:35:37.662426 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:35:37.662439 | orchestrator | changed: [testbed-manager]
2026-04-07 00:35:37.662452 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:35:37.662465 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:35:37.662477 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:35:37.662489 | orchestrator |
2026-04-07 00:35:37.662502 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-07 00:35:37.662515 | orchestrator | Tuesday 07 April 2026 00:35:30 +0000 (0:00:00.745) 0:05:19.572 *********
2026-04-07 00:35:37.662528 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:35:37.662540 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:35:37.662554 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:35:37.662565 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:35:37.662576 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:35:37.662587 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:35:37.662597 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:35:37.662608 | orchestrator |
2026-04-07 00:35:37.662619 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-07 00:35:37.662670 | orchestrator | Tuesday 07 April 2026 00:35:31 +0000 (0:00:00.254)
0:05:19.827 ********* 2026-04-07 00:35:37.662682 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:35:37.662692 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:35:37.662703 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:35:37.662714 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:35:37.662724 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:35:37.662735 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:35:37.662745 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:35:37.662756 | orchestrator | 2026-04-07 00:35:37.662802 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-07 00:35:37.662814 | orchestrator | Tuesday 07 April 2026 00:35:31 +0000 (0:00:00.369) 0:05:20.196 ********* 2026-04-07 00:35:37.662825 | orchestrator | ok: [testbed-manager] 2026-04-07 00:35:37.662836 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:35:37.662847 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:35:37.662858 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:35:37.662869 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:35:37.662879 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:35:37.662890 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:35:37.662900 | orchestrator | 2026-04-07 00:35:37.662911 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-07 00:35:37.662922 | orchestrator | Tuesday 07 April 2026 00:35:31 +0000 (0:00:00.398) 0:05:20.595 ********* 2026-04-07 00:35:37.662933 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:35:37.662944 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:35:37.662954 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:35:37.662965 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:35:37.662975 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:35:37.662986 | orchestrator | skipping: [testbed-node-4] 2026-04-07 
00:35:37.662997 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:35:37.663007 | orchestrator | 2026-04-07 00:35:37.663019 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-07 00:35:37.663030 | orchestrator | Tuesday 07 April 2026 00:35:32 +0000 (0:00:00.262) 0:05:20.857 ********* 2026-04-07 00:35:37.663041 | orchestrator | ok: [testbed-manager] 2026-04-07 00:35:37.663052 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:35:37.663062 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:35:37.663073 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:35:37.663084 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:35:37.663095 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:35:37.663105 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:35:37.663116 | orchestrator | 2026-04-07 00:35:37.663126 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-07 00:35:37.663137 | orchestrator | Tuesday 07 April 2026 00:35:32 +0000 (0:00:00.292) 0:05:21.150 ********* 2026-04-07 00:35:37.663148 | orchestrator | ok: [testbed-manager] =>  2026-04-07 00:35:37.663159 | orchestrator |  docker_version: 5:27.5.1 2026-04-07 00:35:37.663169 | orchestrator | ok: [testbed-node-0] =>  2026-04-07 00:35:37.663180 | orchestrator |  docker_version: 5:27.5.1 2026-04-07 00:35:37.663191 | orchestrator | ok: [testbed-node-1] =>  2026-04-07 00:35:37.663202 | orchestrator |  docker_version: 5:27.5.1 2026-04-07 00:35:37.663212 | orchestrator | ok: [testbed-node-2] =>  2026-04-07 00:35:37.663223 | orchestrator |  docker_version: 5:27.5.1 2026-04-07 00:35:37.663254 | orchestrator | ok: [testbed-node-3] =>  2026-04-07 00:35:37.663266 | orchestrator |  docker_version: 5:27.5.1 2026-04-07 00:35:37.663277 | orchestrator | ok: [testbed-node-4] =>  2026-04-07 00:35:37.663288 | orchestrator |  docker_version: 5:27.5.1 2026-04-07 00:35:37.663298 | orchestrator | ok: [testbed-node-5] =>  
2026-04-07 00:35:37.663309 | orchestrator |  docker_version: 5:27.5.1 2026-04-07 00:35:37.663320 | orchestrator | 2026-04-07 00:35:37.663330 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-07 00:35:37.663341 | orchestrator | Tuesday 07 April 2026 00:35:32 +0000 (0:00:00.231) 0:05:21.381 ********* 2026-04-07 00:35:37.663361 | orchestrator | ok: [testbed-manager] =>  2026-04-07 00:35:37.663371 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-07 00:35:37.663382 | orchestrator | ok: [testbed-node-0] =>  2026-04-07 00:35:37.663393 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-07 00:35:37.663403 | orchestrator | ok: [testbed-node-1] =>  2026-04-07 00:35:37.663414 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-07 00:35:37.663424 | orchestrator | ok: [testbed-node-2] =>  2026-04-07 00:35:37.663435 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-07 00:35:37.663445 | orchestrator | ok: [testbed-node-3] =>  2026-04-07 00:35:37.663456 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-07 00:35:37.663467 | orchestrator | ok: [testbed-node-4] =>  2026-04-07 00:35:37.663477 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-07 00:35:37.663488 | orchestrator | ok: [testbed-node-5] =>  2026-04-07 00:35:37.663499 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-07 00:35:37.663509 | orchestrator | 2026-04-07 00:35:37.663520 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-07 00:35:37.663531 | orchestrator | Tuesday 07 April 2026 00:35:32 +0000 (0:00:00.261) 0:05:21.643 ********* 2026-04-07 00:35:37.663541 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:35:37.663552 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:35:37.663562 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:35:37.663573 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:35:37.663583 | orchestrator | skipping: [testbed-node-3] 
2026-04-07 00:35:37.663594 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:35:37.663605 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:35:37.663615 | orchestrator | 2026-04-07 00:35:37.663626 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-07 00:35:37.663637 | orchestrator | Tuesday 07 April 2026 00:35:33 +0000 (0:00:00.258) 0:05:21.901 ********* 2026-04-07 00:35:37.663648 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:35:37.663658 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:35:37.663669 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:35:37.663679 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:35:37.663690 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:35:37.663701 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:35:37.663712 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:35:37.663722 | orchestrator | 2026-04-07 00:35:37.663733 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-07 00:35:37.663744 | orchestrator | Tuesday 07 April 2026 00:35:33 +0000 (0:00:00.234) 0:05:22.136 ********* 2026-04-07 00:35:37.663763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:35:37.663814 | orchestrator | 2026-04-07 00:35:37.663825 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-07 00:35:37.663836 | orchestrator | Tuesday 07 April 2026 00:35:33 +0000 (0:00:00.401) 0:05:22.537 ********* 2026-04-07 00:35:37.663847 | orchestrator | ok: [testbed-manager] 2026-04-07 00:35:37.663857 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:35:37.663868 | orchestrator | ok: [testbed-node-0] 2026-04-07 
00:35:37.663879 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:35:37.663889 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:35:37.663900 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:35:37.663910 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:35:37.663921 | orchestrator | 2026-04-07 00:35:37.663932 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-07 00:35:37.663943 | orchestrator | Tuesday 07 April 2026 00:35:34 +0000 (0:00:00.764) 0:05:23.302 ********* 2026-04-07 00:35:37.663953 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:35:37.663964 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:35:37.663974 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:35:37.663985 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:35:37.664002 | orchestrator | ok: [testbed-manager] 2026-04-07 00:35:37.664013 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:35:37.664023 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:35:37.664034 | orchestrator | 2026-04-07 00:35:37.664045 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-07 00:35:37.664057 | orchestrator | Tuesday 07 April 2026 00:35:37 +0000 (0:00:02.825) 0:05:26.128 ********* 2026-04-07 00:35:37.664067 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-07 00:35:37.664079 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-07 00:35:37.664089 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-07 00:35:37.664100 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-07 00:35:37.664111 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-07 00:35:37.664121 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-07 00:35:37.664132 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:35:37.664143 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-04-07 00:35:37.664153 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-07 00:35:37.664164 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-07 00:35:37.664175 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:35:37.664186 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-07 00:35:37.664197 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-07 00:35:37.664207 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-07 00:35:37.664218 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:35:37.664229 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-07 00:35:37.664248 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-07 00:36:41.697853 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:36:41.697968 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-07 00:36:41.697984 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-07 00:36:41.697996 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-07 00:36:41.698007 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-07 00:36:41.698077 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:36:41.698090 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:36:41.698101 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-07 00:36:41.698112 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-07 00:36:41.698156 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-07 00:36:41.698168 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:36:41.698180 | orchestrator | 2026-04-07 00:36:41.698208 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-07 00:36:41.698221 | orchestrator | Tuesday 
07 April 2026 00:35:37 +0000 (0:00:00.492) 0:05:26.621 ********* 2026-04-07 00:36:41.698232 | orchestrator | ok: [testbed-manager] 2026-04-07 00:36:41.698243 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.698254 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.698264 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.698275 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.698286 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.698297 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.698308 | orchestrator | 2026-04-07 00:36:41.698319 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-07 00:36:41.698333 | orchestrator | Tuesday 07 April 2026 00:35:45 +0000 (0:00:07.280) 0:05:33.901 ********* 2026-04-07 00:36:41.698346 | orchestrator | ok: [testbed-manager] 2026-04-07 00:36:41.698358 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.698370 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.698383 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.698396 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.698434 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.698446 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.698457 | orchestrator | 2026-04-07 00:36:41.698468 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-07 00:36:41.698478 | orchestrator | Tuesday 07 April 2026 00:35:46 +0000 (0:00:00.972) 0:05:34.873 ********* 2026-04-07 00:36:41.698489 | orchestrator | ok: [testbed-manager] 2026-04-07 00:36:41.698500 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.698510 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.698521 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.698532 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.698542 | orchestrator | 
changed: [testbed-node-5] 2026-04-07 00:36:41.698553 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.698564 | orchestrator | 2026-04-07 00:36:41.698575 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-07 00:36:41.698586 | orchestrator | Tuesday 07 April 2026 00:35:55 +0000 (0:00:09.111) 0:05:43.985 ********* 2026-04-07 00:36:41.698597 | orchestrator | changed: [testbed-manager] 2026-04-07 00:36:41.698608 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.698632 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.698644 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.698655 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.698665 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.698676 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.698687 | orchestrator | 2026-04-07 00:36:41.698698 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-07 00:36:41.698732 | orchestrator | Tuesday 07 April 2026 00:35:58 +0000 (0:00:03.400) 0:05:47.386 ********* 2026-04-07 00:36:41.698743 | orchestrator | ok: [testbed-manager] 2026-04-07 00:36:41.698753 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.698764 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.698775 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.698785 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.698796 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.698807 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.698817 | orchestrator | 2026-04-07 00:36:41.698828 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-07 00:36:41.698839 | orchestrator | Tuesday 07 April 2026 00:35:59 +0000 (0:00:01.337) 0:05:48.723 ********* 2026-04-07 00:36:41.698850 | orchestrator | ok: [testbed-manager] 
2026-04-07 00:36:41.698860 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.698871 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.698881 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.698892 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.698903 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.698913 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.698924 | orchestrator | 2026-04-07 00:36:41.698935 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-07 00:36:41.698946 | orchestrator | Tuesday 07 April 2026 00:36:01 +0000 (0:00:01.324) 0:05:50.048 ********* 2026-04-07 00:36:41.698956 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:36:41.698967 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:36:41.698978 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:36:41.698989 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:36:41.698999 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:36:41.699010 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:36:41.699021 | orchestrator | changed: [testbed-manager] 2026-04-07 00:36:41.699032 | orchestrator | 2026-04-07 00:36:41.699042 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-07 00:36:41.699053 | orchestrator | Tuesday 07 April 2026 00:36:01 +0000 (0:00:00.594) 0:05:50.642 ********* 2026-04-07 00:36:41.699064 | orchestrator | ok: [testbed-manager] 2026-04-07 00:36:41.699075 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.699085 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.699103 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.699114 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.699125 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.699135 | orchestrator | changed: [testbed-node-2] 2026-04-07 
00:36:41.699146 | orchestrator | 2026-04-07 00:36:41.699157 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-07 00:36:41.699185 | orchestrator | Tuesday 07 April 2026 00:36:12 +0000 (0:00:10.933) 0:06:01.576 ********* 2026-04-07 00:36:41.699197 | orchestrator | changed: [testbed-manager] 2026-04-07 00:36:41.699208 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.699218 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.699229 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.699239 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.699250 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.699260 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.699271 | orchestrator | 2026-04-07 00:36:41.699282 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-07 00:36:41.699293 | orchestrator | Tuesday 07 April 2026 00:36:13 +0000 (0:00:00.944) 0:06:02.520 ********* 2026-04-07 00:36:41.699304 | orchestrator | ok: [testbed-manager] 2026-04-07 00:36:41.699314 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.699325 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.699336 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.699346 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.699357 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.699368 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.699379 | orchestrator | 2026-04-07 00:36:41.699389 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-07 00:36:41.699400 | orchestrator | Tuesday 07 April 2026 00:36:23 +0000 (0:00:09.763) 0:06:12.284 ********* 2026-04-07 00:36:41.699411 | orchestrator | ok: [testbed-manager] 2026-04-07 00:36:41.699422 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.699432 | 
orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.699443 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.699454 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.699465 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.699475 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.699486 | orchestrator | 2026-04-07 00:36:41.699497 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-07 00:36:41.699507 | orchestrator | Tuesday 07 April 2026 00:36:35 +0000 (0:00:11.754) 0:06:24.039 ********* 2026-04-07 00:36:41.699518 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-07 00:36:41.699529 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-07 00:36:41.699540 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-07 00:36:41.699551 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-07 00:36:41.699562 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-07 00:36:41.699572 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-07 00:36:41.699583 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-07 00:36:41.699593 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-07 00:36:41.699604 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-07 00:36:41.699615 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-07 00:36:41.699625 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-07 00:36:41.699636 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-07 00:36:41.699647 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-07 00:36:41.699657 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-07 00:36:41.699668 | orchestrator | 2026-04-07 00:36:41.699679 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-04-07 00:36:41.699690 | orchestrator | Tuesday 07 April 2026 00:36:36 +0000 (0:00:01.169) 0:06:25.208 ********* 2026-04-07 00:36:41.699737 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:36:41.699749 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:36:41.699759 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:36:41.699770 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:36:41.699781 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:36:41.699791 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:36:41.699802 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:36:41.699812 | orchestrator | 2026-04-07 00:36:41.699823 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-07 00:36:41.699834 | orchestrator | Tuesday 07 April 2026 00:36:36 +0000 (0:00:00.529) 0:06:25.738 ********* 2026-04-07 00:36:41.699844 | orchestrator | ok: [testbed-manager] 2026-04-07 00:36:41.699855 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:36:41.699866 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:36:41.699876 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:36:41.699887 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:36:41.699898 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:36:41.699908 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:36:41.699919 | orchestrator | 2026-04-07 00:36:41.699930 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-07 00:36:41.699941 | orchestrator | Tuesday 07 April 2026 00:36:40 +0000 (0:00:03.986) 0:06:29.725 ********* 2026-04-07 00:36:41.699952 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:36:41.699963 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:36:41.699973 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:36:41.699984 | orchestrator | skipping: 
[testbed-node-2] 2026-04-07 00:36:41.699994 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:36:41.700009 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:36:41.700027 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:36:41.700046 | orchestrator | 2026-04-07 00:36:41.700078 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-07 00:36:41.700095 | orchestrator | Tuesday 07 April 2026 00:36:41 +0000 (0:00:00.473) 0:06:30.199 ********* 2026-04-07 00:36:41.700112 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-04-07 00:36:41.700130 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-07 00:36:41.700196 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:36:41.700215 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-07 00:36:41.700231 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-07 00:36:41.700248 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:36:41.700266 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-07 00:36:41.700284 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-07 00:36:41.700303 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:36:41.700335 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-07 00:37:00.412599 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-07 00:37:00.412740 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:00.412758 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-07 00:37:00.412769 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-07 00:37:00.412781 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:37:00.412792 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-07 00:37:00.412803 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-04-07 00:37:00.412814 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:00.412825 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-07 00:37:00.412836 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-07 00:37:00.412847 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:37:00.412859 | orchestrator | 2026-04-07 00:37:00.412872 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-07 00:37:00.412905 | orchestrator | Tuesday 07 April 2026 00:36:41 +0000 (0:00:00.513) 0:06:30.713 ********* 2026-04-07 00:37:00.412917 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:00.412927 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:37:00.412938 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:37:00.412948 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:00.412959 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:37:00.412970 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:00.412980 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:37:00.412991 | orchestrator | 2026-04-07 00:37:00.413002 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-07 00:37:00.413013 | orchestrator | Tuesday 07 April 2026 00:36:42 +0000 (0:00:00.503) 0:06:31.216 ********* 2026-04-07 00:37:00.413024 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:00.413034 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:37:00.413045 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:37:00.413056 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:00.413066 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:37:00.413077 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:00.413088 | orchestrator | skipping: [testbed-node-5] 
2026-04-07 00:37:00.413099 | orchestrator | 2026-04-07 00:37:00.413110 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-04-07 00:37:00.413121 | orchestrator | Tuesday 07 April 2026 00:36:43 +0000 (0:00:00.636) 0:06:31.853 ********* 2026-04-07 00:37:00.413132 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:00.413142 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:37:00.413153 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:37:00.413164 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:00.413174 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:37:00.413185 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:00.413195 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:37:00.413206 | orchestrator | 2026-04-07 00:37:00.413217 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-04-07 00:37:00.413236 | orchestrator | Tuesday 07 April 2026 00:36:43 +0000 (0:00:00.536) 0:06:32.389 ********* 2026-04-07 00:37:00.413247 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:00.413258 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:00.413269 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:00.413280 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:00.413291 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:00.413301 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:00.413312 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:00.413323 | orchestrator | 2026-04-07 00:37:00.413334 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-04-07 00:37:00.413345 | orchestrator | Tuesday 07 April 2026 00:36:45 +0000 (0:00:01.819) 0:06:34.209 ********* 2026-04-07 00:37:00.413356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:37:00.413370 | orchestrator | 2026-04-07 00:37:00.413381 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-04-07 00:37:00.413391 | orchestrator | Tuesday 07 April 2026 00:36:46 +0000 (0:00:00.814) 0:06:35.023 ********* 2026-04-07 00:37:00.413402 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:00.413413 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:00.413424 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:00.413434 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:00.413446 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:00.413457 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:00.413468 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:00.413478 | orchestrator | 2026-04-07 00:37:00.413489 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-04-07 00:37:00.413507 | orchestrator | Tuesday 07 April 2026 00:36:47 +0000 (0:00:00.875) 0:06:35.899 ********* 2026-04-07 00:37:00.413518 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:00.413529 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:00.413539 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:00.413550 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:00.413561 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:00.413571 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:00.413582 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:00.413593 | orchestrator | 2026-04-07 00:37:00.413604 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-04-07 00:37:00.413614 | orchestrator | Tuesday 07 April 2026 00:36:47 +0000 (0:00:00.769) 0:06:36.668 ********* 2026-04-07 00:37:00.413625 | orchestrator | ok: [testbed-manager] 2026-04-07 
00:37:00.413636 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:00.413646 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:00.413657 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:00.413668 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:00.413678 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:00.413707 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:00.413718 | orchestrator | 2026-04-07 00:37:00.413729 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-04-07 00:37:00.413757 | orchestrator | Tuesday 07 April 2026 00:36:49 +0000 (0:00:01.297) 0:06:37.966 ********* 2026-04-07 00:37:00.413768 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:00.413779 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:00.413789 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:00.413800 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:00.413811 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:00.413822 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:00.413832 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:00.413843 | orchestrator | 2026-04-07 00:37:00.413854 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-04-07 00:37:00.413865 | orchestrator | Tuesday 07 April 2026 00:36:50 +0000 (0:00:01.290) 0:06:39.256 ********* 2026-04-07 00:37:00.413876 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:00.413886 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:00.413897 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:00.413908 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:00.413918 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:00.413929 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:00.413940 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:00.413950 | orchestrator | 2026-04-07 
00:37:00.413961 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-07 00:37:00.413972 | orchestrator | Tuesday 07 April 2026 00:36:51 +0000 (0:00:01.340) 0:06:40.597 ********* 2026-04-07 00:37:00.413983 | orchestrator | changed: [testbed-manager] 2026-04-07 00:37:00.413993 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:00.414004 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:00.414094 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:00.414107 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:00.414118 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:00.414128 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:00.414183 | orchestrator | 2026-04-07 00:37:00.414196 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-07 00:37:00.414207 | orchestrator | Tuesday 07 April 2026 00:36:53 +0000 (0:00:01.578) 0:06:42.175 ********* 2026-04-07 00:37:00.414218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:37:00.414229 | orchestrator | 2026-04-07 00:37:00.414240 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-07 00:37:00.414251 | orchestrator | Tuesday 07 April 2026 00:36:54 +0000 (0:00:00.847) 0:06:43.023 ********* 2026-04-07 00:37:00.414276 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:00.414288 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:00.414298 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:00.414309 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:00.414320 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:00.414331 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:00.414341 | orchestrator | ok: 
[testbed-node-5] 2026-04-07 00:37:00.414352 | orchestrator | 2026-04-07 00:37:00.414363 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-07 00:37:00.414374 | orchestrator | Tuesday 07 April 2026 00:36:55 +0000 (0:00:01.404) 0:06:44.427 ********* 2026-04-07 00:37:00.414385 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:00.414396 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:00.414407 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:00.414417 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:00.414428 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:00.414439 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:00.414450 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:00.414461 | orchestrator | 2026-04-07 00:37:00.414472 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-07 00:37:00.414482 | orchestrator | Tuesday 07 April 2026 00:36:56 +0000 (0:00:01.287) 0:06:45.715 ********* 2026-04-07 00:37:00.414493 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:00.414504 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:00.414515 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:00.414525 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:00.414536 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:00.414547 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:00.414558 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:00.414568 | orchestrator | 2026-04-07 00:37:00.414580 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-07 00:37:00.414590 | orchestrator | Tuesday 07 April 2026 00:36:58 +0000 (0:00:01.175) 0:06:46.891 ********* 2026-04-07 00:37:00.414601 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:00.414612 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:00.414623 | orchestrator | ok: [testbed-node-1] 2026-04-07 
00:37:00.414633 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:00.414644 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:00.414655 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:00.414665 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:00.414676 | orchestrator | 2026-04-07 00:37:00.414725 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-07 00:37:00.414737 | orchestrator | Tuesday 07 April 2026 00:36:59 +0000 (0:00:01.125) 0:06:48.016 ********* 2026-04-07 00:37:00.414748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:37:00.414759 | orchestrator | 2026-04-07 00:37:00.414770 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 00:37:00.414781 | orchestrator | Tuesday 07 April 2026 00:37:00 +0000 (0:00:00.891) 0:06:48.908 ********* 2026-04-07 00:37:00.414792 | orchestrator | 2026-04-07 00:37:00.414803 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 00:37:00.414814 | orchestrator | Tuesday 07 April 2026 00:37:00 +0000 (0:00:00.045) 0:06:48.954 ********* 2026-04-07 00:37:00.414824 | orchestrator | 2026-04-07 00:37:00.414835 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 00:37:00.414846 | orchestrator | Tuesday 07 April 2026 00:37:00 +0000 (0:00:00.180) 0:06:49.134 ********* 2026-04-07 00:37:00.414857 | orchestrator | 2026-04-07 00:37:00.414868 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 00:37:00.414888 | orchestrator | Tuesday 07 April 2026 00:37:00 +0000 (0:00:00.043) 0:06:49.178 ********* 2026-04-07 00:37:26.814291 | orchestrator | 
2026-04-07 00:37:26.814412 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 00:37:26.814427 | orchestrator | Tuesday 07 April 2026 00:37:00 +0000 (0:00:00.041) 0:06:49.219 ********* 2026-04-07 00:37:26.814438 | orchestrator | 2026-04-07 00:37:26.814448 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 00:37:26.814458 | orchestrator | Tuesday 07 April 2026 00:37:00 +0000 (0:00:00.076) 0:06:49.295 ********* 2026-04-07 00:37:26.814467 | orchestrator | 2026-04-07 00:37:26.814477 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 00:37:26.814487 | orchestrator | Tuesday 07 April 2026 00:37:00 +0000 (0:00:00.040) 0:06:49.335 ********* 2026-04-07 00:37:26.814496 | orchestrator | 2026-04-07 00:37:26.814506 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-07 00:37:26.814516 | orchestrator | Tuesday 07 April 2026 00:37:00 +0000 (0:00:00.044) 0:06:49.379 ********* 2026-04-07 00:37:26.814526 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:26.814536 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:26.814546 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:26.814556 | orchestrator | 2026-04-07 00:37:26.814565 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-07 00:37:26.814575 | orchestrator | Tuesday 07 April 2026 00:37:01 +0000 (0:00:01.306) 0:06:50.686 ********* 2026-04-07 00:37:26.814585 | orchestrator | changed: [testbed-manager] 2026-04-07 00:37:26.814595 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:26.814605 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:26.814614 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:26.814624 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:26.814634 | orchestrator | changed: 
[testbed-node-4] 2026-04-07 00:37:26.814643 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:26.814652 | orchestrator | 2026-04-07 00:37:26.814662 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-07 00:37:26.814739 | orchestrator | Tuesday 07 April 2026 00:37:03 +0000 (0:00:01.283) 0:06:51.969 ********* 2026-04-07 00:37:26.814750 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:26.814760 | orchestrator | changed: [testbed-manager] 2026-04-07 00:37:26.814770 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:26.814780 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:26.814789 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:26.814799 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:26.814808 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:26.814818 | orchestrator | 2026-04-07 00:37:26.814828 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-07 00:37:26.814838 | orchestrator | Tuesday 07 April 2026 00:37:04 +0000 (0:00:01.149) 0:06:53.118 ********* 2026-04-07 00:37:26.814849 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:26.814861 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:26.814872 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:26.814883 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:26.814894 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:26.814905 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:26.814916 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:26.814927 | orchestrator | 2026-04-07 00:37:26.814952 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-07 00:37:26.814964 | orchestrator | Tuesday 07 April 2026 00:37:06 +0000 (0:00:02.292) 0:06:55.410 ********* 2026-04-07 00:37:26.814975 | orchestrator | skipping: [testbed-node-0] 
2026-04-07 00:37:26.814987 | orchestrator | 2026-04-07 00:37:26.814998 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-07 00:37:26.815009 | orchestrator | Tuesday 07 April 2026 00:37:06 +0000 (0:00:00.092) 0:06:55.503 ********* 2026-04-07 00:37:26.815020 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:26.815032 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:26.815044 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:26.815055 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:26.815077 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:26.815088 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:26.815099 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:26.815110 | orchestrator | 2026-04-07 00:37:26.815122 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-07 00:37:26.815134 | orchestrator | Tuesday 07 April 2026 00:37:07 +0000 (0:00:01.165) 0:06:56.669 ********* 2026-04-07 00:37:26.815145 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:26.815157 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:37:26.815168 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:37:26.815180 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:26.815191 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:37:26.815203 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:26.815213 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:37:26.815223 | orchestrator | 2026-04-07 00:37:26.815232 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-07 00:37:26.815242 | orchestrator | Tuesday 07 April 2026 00:37:08 +0000 (0:00:00.521) 0:06:57.190 ********* 2026-04-07 00:37:26.815253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:37:26.815265 | orchestrator | 2026-04-07 00:37:26.815275 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-07 00:37:26.815285 | orchestrator | Tuesday 07 April 2026 00:37:09 +0000 (0:00:00.843) 0:06:58.034 ********* 2026-04-07 00:37:26.815295 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:26.815304 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:26.815314 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:26.815324 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:26.815334 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:26.815343 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:26.815353 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:26.815363 | orchestrator | 2026-04-07 00:37:26.815373 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-07 00:37:26.815382 | orchestrator | Tuesday 07 April 2026 00:37:10 +0000 (0:00:01.027) 0:06:59.062 ********* 2026-04-07 00:37:26.815392 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-07 00:37:26.815419 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-07 00:37:26.815429 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-07 00:37:26.815439 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-07 00:37:26.815449 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-07 00:37:26.815458 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-07 00:37:26.815468 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-07 00:37:26.815478 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-07 00:37:26.815488 | orchestrator | changed: [testbed-node-0] => 
(item=docker_images) 2026-04-07 00:37:26.815497 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-07 00:37:26.815507 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-07 00:37:26.815516 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-07 00:37:26.815526 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-07 00:37:26.815536 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-07 00:37:26.815545 | orchestrator | 2026-04-07 00:37:26.815555 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-04-07 00:37:26.815564 | orchestrator | Tuesday 07 April 2026 00:37:12 +0000 (0:00:02.515) 0:07:01.577 ********* 2026-04-07 00:37:26.815574 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:26.815584 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:37:26.815593 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:37:26.815609 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:26.815618 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:37:26.815628 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:26.815637 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:37:26.815647 | orchestrator | 2026-04-07 00:37:26.815657 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-07 00:37:26.815667 | orchestrator | Tuesday 07 April 2026 00:37:13 +0000 (0:00:00.462) 0:07:02.039 ********* 2026-04-07 00:37:26.815697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:37:26.815708 | orchestrator | 2026-04-07 00:37:26.815718 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-04-07 00:37:26.815727 | orchestrator | Tuesday 07 April 2026 00:37:14 +0000 (0:00:00.944) 0:07:02.984 ********* 2026-04-07 00:37:26.815737 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:26.815747 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:26.815756 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:26.815766 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:26.815776 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:26.815786 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:26.815795 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:26.815805 | orchestrator | 2026-04-07 00:37:26.815819 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-07 00:37:26.815830 | orchestrator | Tuesday 07 April 2026 00:37:15 +0000 (0:00:00.840) 0:07:03.825 ********* 2026-04-07 00:37:26.815839 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:26.815849 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:26.815859 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:26.815868 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:26.815878 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:26.815888 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:26.815898 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:26.815907 | orchestrator | 2026-04-07 00:37:26.815917 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-07 00:37:26.815927 | orchestrator | Tuesday 07 April 2026 00:37:15 +0000 (0:00:00.794) 0:07:04.619 ********* 2026-04-07 00:37:26.815937 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:26.815946 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:37:26.815956 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:37:26.815966 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:26.815976 | orchestrator | skipping: [testbed-node-3] 
2026-04-07 00:37:26.815985 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:26.815995 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:37:26.816004 | orchestrator | 2026-04-07 00:37:26.816014 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-07 00:37:26.816024 | orchestrator | Tuesday 07 April 2026 00:37:16 +0000 (0:00:00.504) 0:07:05.124 ********* 2026-04-07 00:37:26.816034 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:26.816043 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:26.816053 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:26.816063 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:26.816073 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:26.816083 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:26.816092 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:26.816102 | orchestrator | 2026-04-07 00:37:26.816112 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-07 00:37:26.816122 | orchestrator | Tuesday 07 April 2026 00:37:17 +0000 (0:00:01.458) 0:07:06.582 ********* 2026-04-07 00:37:26.816131 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:26.816141 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:37:26.816151 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:37:26.816160 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:26.816175 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:37:26.816185 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:26.816195 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:37:26.816204 | orchestrator | 2026-04-07 00:37:26.816214 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-07 00:37:26.816224 | orchestrator | Tuesday 07 April 2026 00:37:18 +0000 (0:00:00.639) 0:07:07.222 ********* 2026-04-07 00:37:26.816233 | orchestrator | 
ok: [testbed-manager] 2026-04-07 00:37:26.816243 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:26.816253 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:26.816262 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:26.816272 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:26.816282 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:26.816297 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:58.660940 | orchestrator | 2026-04-07 00:37:58.661016 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-04-07 00:37:58.661027 | orchestrator | Tuesday 07 April 2026 00:37:26 +0000 (0:00:08.426) 0:07:15.648 ********* 2026-04-07 00:37:58.661035 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:58.661042 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:58.661050 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:58.661057 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:58.661064 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:58.661071 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:58.661077 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:58.661084 | orchestrator | 2026-04-07 00:37:58.661091 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-07 00:37:58.661098 | orchestrator | Tuesday 07 April 2026 00:37:28 +0000 (0:00:01.234) 0:07:16.883 ********* 2026-04-07 00:37:58.661105 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:58.661111 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:58.661118 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:58.661125 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:58.661131 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:58.661138 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:58.661145 | orchestrator | changed: [testbed-node-5] 2026-04-07 
00:37:58.661152 | orchestrator | 2026-04-07 00:37:58.661158 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-07 00:37:58.661165 | orchestrator | Tuesday 07 April 2026 00:37:29 +0000 (0:00:01.671) 0:07:18.554 ********* 2026-04-07 00:37:58.661172 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:58.661178 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:37:58.661185 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:37:58.661191 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:37:58.661198 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:37:58.661205 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:37:58.661211 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:37:58.661218 | orchestrator | 2026-04-07 00:37:58.661224 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-07 00:37:58.661231 | orchestrator | Tuesday 07 April 2026 00:37:31 +0000 (0:00:01.765) 0:07:20.319 ********* 2026-04-07 00:37:58.661238 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:58.661244 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:58.661251 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:58.661258 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:58.661264 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:58.661271 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:58.661278 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:58.661284 | orchestrator | 2026-04-07 00:37:58.661291 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-07 00:37:58.661298 | orchestrator | Tuesday 07 April 2026 00:37:32 +0000 (0:00:00.843) 0:07:21.163 ********* 2026-04-07 00:37:58.661304 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:58.661311 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:37:58.661336 | orchestrator | skipping: 
[testbed-node-1] 2026-04-07 00:37:58.661343 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:58.661350 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:37:58.661356 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:58.661364 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:37:58.661370 | orchestrator | 2026-04-07 00:37:58.661377 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-07 00:37:58.661384 | orchestrator | Tuesday 07 April 2026 00:37:33 +0000 (0:00:00.786) 0:07:21.949 ********* 2026-04-07 00:37:58.661391 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:37:58.661397 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:37:58.661404 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:37:58.661410 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:37:58.661417 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:37:58.661423 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:37:58.661430 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:37:58.661436 | orchestrator | 2026-04-07 00:37:58.661443 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-04-07 00:37:58.661450 | orchestrator | Tuesday 07 April 2026 00:37:33 +0000 (0:00:00.660) 0:07:22.610 ********* 2026-04-07 00:37:58.661456 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:58.661463 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:58.661470 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:58.661476 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:58.661483 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:58.661489 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:58.661496 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:58.661502 | orchestrator | 2026-04-07 00:37:58.661509 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-04-07 00:37:58.661516 | orchestrator | Tuesday 07 April 2026 00:37:34 +0000 (0:00:00.481) 0:07:23.091 ********* 2026-04-07 00:37:58.661522 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:58.661529 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:58.661535 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:58.661542 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:58.661548 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:58.661555 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:58.661561 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:58.661568 | orchestrator | 2026-04-07 00:37:58.661575 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-04-07 00:37:58.661581 | orchestrator | Tuesday 07 April 2026 00:37:34 +0000 (0:00:00.500) 0:07:23.592 ********* 2026-04-07 00:37:58.661588 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:58.661595 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:58.661601 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:58.661608 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:37:58.661614 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:58.661621 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:37:58.661627 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:58.661634 | orchestrator | 2026-04-07 00:37:58.661640 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-04-07 00:37:58.661647 | orchestrator | Tuesday 07 April 2026 00:37:35 +0000 (0:00:00.493) 0:07:24.086 ********* 2026-04-07 00:37:58.661672 | orchestrator | ok: [testbed-manager] 2026-04-07 00:37:58.661679 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:37:58.661685 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:37:58.661692 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:37:58.661698 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:37:58.661705 | orchestrator | ok: [testbed-node-4] 
2026-04-07 00:37:58.661711 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:37:58.661718 | orchestrator |
2026-04-07 00:37:58.661737 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-07 00:37:58.661744 | orchestrator | Tuesday 07 April 2026 00:37:40 +0000 (0:00:05.127) 0:07:29.213 *********
2026-04-07 00:37:58.661751 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:37:58.661764 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:37:58.661784 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:37:58.661790 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:37:58.661797 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:37:58.661804 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:37:58.661810 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:37:58.661817 | orchestrator |
2026-04-07 00:37:58.661824 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-07 00:37:58.661831 | orchestrator | Tuesday 07 April 2026 00:37:41 +0000 (0:00:00.664) 0:07:29.878 *********
2026-04-07 00:37:58.661839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:37:58.661847 | orchestrator |
2026-04-07 00:37:58.661854 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-07 00:37:58.661861 | orchestrator | Tuesday 07 April 2026 00:37:41 +0000 (0:00:00.854) 0:07:30.732 *********
2026-04-07 00:37:58.661867 | orchestrator | ok: [testbed-manager]
2026-04-07 00:37:58.661874 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:37:58.661881 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:37:58.661887 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:37:58.661894 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:37:58.661900 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:37:58.661907 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:37:58.661913 | orchestrator |
2026-04-07 00:37:58.661920 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-07 00:37:58.661927 | orchestrator | Tuesday 07 April 2026 00:37:44 +0000 (0:00:02.080) 0:07:32.812 *********
2026-04-07 00:37:58.661933 | orchestrator | ok: [testbed-manager]
2026-04-07 00:37:58.661940 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:37:58.661946 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:37:58.661953 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:37:58.661960 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:37:58.661966 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:37:58.661973 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:37:58.661979 | orchestrator |
2026-04-07 00:37:58.661986 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-07 00:37:58.661993 | orchestrator | Tuesday 07 April 2026 00:37:45 +0000 (0:00:01.205) 0:07:34.017 *********
2026-04-07 00:37:58.661999 | orchestrator | ok: [testbed-manager]
2026-04-07 00:37:58.662006 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:37:58.662073 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:37:58.662082 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:37:58.662088 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:37:58.662095 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:37:58.662102 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:37:58.662108 | orchestrator |
2026-04-07 00:37:58.662115 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-07 00:37:58.662126 | orchestrator | Tuesday 07 April 2026 00:37:46 +0000 (0:00:00.805) 0:07:34.823 *********
2026-04-07 00:37:58.662133 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-07 00:37:58.662141 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-07 00:37:58.662148 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-07 00:37:58.662155 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-07 00:37:58.662161 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-07 00:37:58.662173 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-07 00:37:58.662180 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-07 00:37:58.662187 | orchestrator |
2026-04-07 00:37:58.662193 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-07 00:37:58.662200 | orchestrator | Tuesday 07 April 2026 00:37:47 +0000 (0:00:01.632) 0:07:36.456 *********
2026-04-07 00:37:58.662207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:37:58.662214 | orchestrator |
2026-04-07 00:37:58.662221 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-07 00:37:58.662228 | orchestrator | Tuesday 07 April 2026 00:37:48 +0000 (0:00:00.794) 0:07:37.250 *********
2026-04-07 00:37:58.662234 | orchestrator | changed: [testbed-manager]
2026-04-07 00:37:58.662241 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:37:58.662248 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:37:58.662255 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:37:58.662261 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:37:58.662268 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:37:58.662274 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:37:58.662281 | orchestrator |
2026-04-07 00:37:58.662293 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-07 00:38:29.150320 | orchestrator | Tuesday 07 April 2026 00:37:58 +0000 (0:00:10.180) 0:07:47.431 *********
2026-04-07 00:38:29.150419 | orchestrator | ok: [testbed-manager]
2026-04-07 00:38:29.150435 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:38:29.150447 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:38:29.150473 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:38:29.150484 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:38:29.150506 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:38:29.150517 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:38:29.150528 | orchestrator |
2026-04-07 00:38:29.150539 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-07 00:38:29.150551 | orchestrator | Tuesday 07 April 2026 00:38:00 +0000 (0:00:01.674) 0:07:49.106 *********
2026-04-07 00:38:29.150561 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:38:29.150572 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:38:29.150583 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:38:29.150594 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:38:29.150605 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:38:29.150615 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:38:29.150626 | orchestrator |
2026-04-07 00:38:29.150688 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-07 00:38:29.150700 | orchestrator | Tuesday 07 April 2026 00:38:01 +0000 (0:00:01.422) 0:07:50.528 *********
2026-04-07 00:38:29.150711 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:29.150723 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:29.150733 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:29.150744 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:29.150755 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:29.150766 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:29.150776 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:29.150787 | orchestrator |
2026-04-07 00:38:29.150798 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-07 00:38:29.150809 | orchestrator |
2026-04-07 00:38:29.150820 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-07 00:38:29.150831 | orchestrator | Tuesday 07 April 2026 00:38:02 +0000 (0:00:00.532) 0:07:51.711 *********
2026-04-07 00:38:29.150841 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:38:29.150878 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:38:29.150889 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:38:29.150900 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:38:29.150910 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:38:29.150921 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:38:29.150931 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:38:29.150942 | orchestrator |
2026-04-07 00:38:29.150953 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-07 00:38:29.150963 | orchestrator |
2026-04-07 00:38:29.150974 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-07 00:38:29.150984 | orchestrator | Tuesday 07 April 2026 00:38:03 +0000 (0:00:00.532) 0:07:52.244 *********
2026-04-07 00:38:29.150995 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:29.151006 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:29.151016 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:29.151028 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:29.151038 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:29.151063 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:29.151074 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:29.151085 | orchestrator |
2026-04-07 00:38:29.151096 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-07 00:38:29.151106 | orchestrator | Tuesday 07 April 2026 00:38:04 +0000 (0:00:01.347) 0:07:53.591 *********
2026-04-07 00:38:29.151117 | orchestrator | ok: [testbed-manager]
2026-04-07 00:38:29.151128 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:38:29.151138 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:38:29.151149 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:38:29.151160 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:38:29.151170 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:38:29.151181 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:38:29.151191 | orchestrator |
2026-04-07 00:38:29.151202 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-07 00:38:29.151213 | orchestrator | Tuesday 07 April 2026 00:38:06 +0000 (0:00:01.509) 0:07:55.101 *********
2026-04-07 00:38:29.151223 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:38:29.151234 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:38:29.151245 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:38:29.151255 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:38:29.151265 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:38:29.151276 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:38:29.151287 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:38:29.151297 | orchestrator |
2026-04-07 00:38:29.151308 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-07 00:38:29.151319 | orchestrator | Tuesday 07 April 2026 00:38:06 +0000 (0:00:00.465) 0:07:55.567 *********
2026-04-07 00:38:29.151330 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:38:29.151342 | orchestrator |
2026-04-07 00:38:29.151352 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-07 00:38:29.151363 | orchestrator | Tuesday 07 April 2026 00:38:07 +0000 (0:00:00.796) 0:07:56.364 *********
2026-04-07 00:38:29.151375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:38:29.151389 | orchestrator |
2026-04-07 00:38:29.151399 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-07 00:38:29.151410 | orchestrator | Tuesday 07 April 2026 00:38:08 +0000 (0:00:00.913) 0:07:57.277 *********
2026-04-07 00:38:29.151421 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:29.151431 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:29.151442 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:29.151452 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:29.151471 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:29.151482 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:29.151492 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:29.151503 | orchestrator |
2026-04-07 00:38:29.151532 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-07 00:38:29.151544 | orchestrator | Tuesday 07 April 2026 00:38:18 +0000 (0:00:09.770) 0:08:07.048 *********
2026-04-07 00:38:29.151555 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:29.151565 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:29.151576 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:29.151587 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:29.151597 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:29.151608 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:29.151619 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:29.151793 | orchestrator |
2026-04-07 00:38:29.151830 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-07 00:38:29.151841 | orchestrator | Tuesday 07 April 2026 00:38:19 +0000 (0:00:00.840) 0:08:07.888 *********
2026-04-07 00:38:29.151852 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:29.151863 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:29.151874 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:29.151884 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:29.151894 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:29.151905 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:29.151915 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:29.151926 | orchestrator |
2026-04-07 00:38:29.151936 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-07 00:38:29.151947 | orchestrator | Tuesday 07 April 2026 00:38:20 +0000 (0:00:01.362) 0:08:09.250 *********
2026-04-07 00:38:29.151958 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:29.151968 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:29.151979 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:29.151989 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:29.151999 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:29.152010 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:29.152020 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:29.152031 | orchestrator |
2026-04-07 00:38:29.152041 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-07 00:38:29.152052 | orchestrator | Tuesday 07 April 2026 00:38:22 +0000 (0:00:01.926) 0:08:11.177 *********
2026-04-07 00:38:29.152062 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:29.152073 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:29.152084 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:29.152094 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:29.152105 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:29.152114 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:29.152124 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:29.152133 | orchestrator |
2026-04-07 00:38:29.152143 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-07 00:38:29.152153 | orchestrator | Tuesday 07 April 2026 00:38:23 +0000 (0:00:01.242) 0:08:12.419 *********
2026-04-07 00:38:29.152162 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:29.152171 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:29.152181 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:29.152190 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:29.152199 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:29.152217 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:29.152227 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:29.152236 | orchestrator |
2026-04-07 00:38:29.152246 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-07 00:38:29.152255 | orchestrator |
2026-04-07 00:38:29.152265 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-07 00:38:29.152274 | orchestrator | Tuesday 07 April 2026 00:38:24 +0000 (0:00:01.165) 0:08:13.585 *********
2026-04-07 00:38:29.152293 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:38:29.152303 | orchestrator |
2026-04-07 00:38:29.152312 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-07 00:38:29.152322 | orchestrator | Tuesday 07 April 2026 00:38:25 +0000 (0:00:00.898) 0:08:14.483 *********
2026-04-07 00:38:29.152331 | orchestrator | ok: [testbed-manager]
2026-04-07 00:38:29.152341 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:38:29.152350 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:38:29.152360 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:38:29.152369 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:38:29.152378 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:38:29.152387 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:38:29.152397 | orchestrator |
2026-04-07 00:38:29.152406 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-07 00:38:29.152415 | orchestrator | Tuesday 07 April 2026 00:38:26 +0000 (0:00:00.808) 0:08:15.292 *********
2026-04-07 00:38:29.152425 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:29.152434 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:29.152444 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:29.152453 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:29.152463 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:29.152472 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:29.152482 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:29.152491 | orchestrator |
2026-04-07 00:38:29.152500 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-07 00:38:29.152510 | orchestrator | Tuesday 07 April 2026 00:38:27 +0000 (0:00:01.166) 0:08:16.459 *********
2026-04-07 00:38:29.152519 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:38:29.152529 | orchestrator |
2026-04-07 00:38:29.152538 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-07 00:38:29.152548 | orchestrator | Tuesday 07 April 2026 00:38:28 +0000 (0:00:00.722) 0:08:17.181 *********
2026-04-07 00:38:29.152557 | orchestrator | ok: [testbed-manager]
2026-04-07 00:38:29.152566 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:38:29.152576 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:38:29.152585 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:38:29.152594 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:38:29.152604 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:38:29.152613 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:38:29.152622 | orchestrator |
2026-04-07 00:38:29.152679 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-07 00:38:30.491805 | orchestrator | Tuesday 07 April 2026 00:38:29 +0000 (0:00:00.736) 0:08:17.918 *********
2026-04-07 00:38:30.491870 | orchestrator | changed: [testbed-manager]
2026-04-07 00:38:30.491877 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:38:30.491881 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:38:30.491885 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:38:30.491889 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:38:30.491893 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:38:30.491897 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:38:30.491901 | orchestrator |
2026-04-07 00:38:30.491905 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:38:30.491910 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-07 00:38:30.491916 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-07 00:38:30.491919 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-07 00:38:30.491940 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-07 00:38:30.491944 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-07 00:38:30.491948 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-07 00:38:30.491952 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-07 00:38:30.491955 | orchestrator |
2026-04-07 00:38:30.491959 | orchestrator |
2026-04-07 00:38:30.491963 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:38:30.491967 | orchestrator | Tuesday 07 April 2026 00:38:30 +0000 (0:00:01.162) 0:08:19.080 *********
2026-04-07 00:38:30.491971 | orchestrator | ===============================================================================
2026-04-07 00:38:30.491975 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.81s
2026-04-07 00:38:30.491979 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.24s
2026-04-07 00:38:30.491982 | orchestrator | osism.commons.packages : Download required packages -------------------- 33.42s
2026-04-07 00:38:30.491995 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.48s
2026-04-07 00:38:30.491999 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.75s
2026-04-07 00:38:30.492003 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.03s
2026-04-07 00:38:30.492007 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.93s
2026-04-07 00:38:30.492011 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.89s
2026-04-07 00:38:30.492015 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.18s
2026-04-07 00:38:30.492018 | orchestrator | osism.services.rng : Install rng package ------------------------------- 10.00s
2026-04-07 00:38:30.492022 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.77s
2026-04-07 00:38:30.492026 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.76s
2026-04-07 00:38:30.492030 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.11s
2026-04-07 00:38:30.492034 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.96s
2026-04-07 00:38:30.492037 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.73s
2026-04-07 00:38:30.492041 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.43s
2026-04-07 00:38:30.492045 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 8.21s
2026-04-07 00:38:30.492048 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.28s
2026-04-07 00:38:30.492052 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.69s
2026-04-07 00:38:30.492056 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.68s
2026-04-07 00:38:30.609392 | orchestrator | + osism apply fail2ban
2026-04-07 00:38:42.081590 | orchestrator | 2026-04-07 00:38:42 | INFO  | Prepare task for execution of fail2ban.
2026-04-07 00:38:42.156146 | orchestrator | 2026-04-07 00:38:42 | INFO  | Task 09aeb703-8cc2-48a9-8aea-ff27feb2aa17 (fail2ban) was prepared for execution.
2026-04-07 00:38:42.156225 | orchestrator | 2026-04-07 00:38:42 | INFO  | It takes a moment until task 09aeb703-8cc2-48a9-8aea-ff27feb2aa17 (fail2ban) has been started and output is visible here.
2026-04-07 00:39:03.121562 | orchestrator |
2026-04-07 00:39:03.121722 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-07 00:39:03.121767 | orchestrator |
2026-04-07 00:39:03.121779 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-07 00:39:03.121790 | orchestrator | Tuesday 07 April 2026 00:38:45 +0000 (0:00:00.306) 0:00:00.306 *********
2026-04-07 00:39:03.121803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:39:03.121815 | orchestrator |
2026-04-07 00:39:03.121826 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-07 00:39:03.121836 | orchestrator | Tuesday 07 April 2026 00:38:46 +0000 (0:00:01.059) 0:00:01.365 *********
2026-04-07 00:39:03.121846 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:39:03.121857 | orchestrator | changed: [testbed-manager]
2026-04-07 00:39:03.121867 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:39:03.121878 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:39:03.121889 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:39:03.121901 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:39:03.121911 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:39:03.121921 | orchestrator |
2026-04-07 00:39:03.121931 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-07 00:39:03.121941 | orchestrator | Tuesday 07 April 2026 00:38:58 +0000 (0:00:12.280) 0:00:13.646 *********
2026-04-07 00:39:03.121951 | orchestrator | changed: [testbed-manager]
2026-04-07 00:39:03.121961 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:39:03.121971 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:39:03.121982 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:39:03.121992 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:39:03.122003 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:39:03.122013 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:39:03.122081 | orchestrator |
2026-04-07 00:39:03.122093 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-07 00:39:03.122103 | orchestrator | Tuesday 07 April 2026 00:39:00 +0000 (0:00:01.518) 0:00:15.164 *********
2026-04-07 00:39:03.122113 | orchestrator | ok: [testbed-manager]
2026-04-07 00:39:03.122124 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:39:03.122134 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:39:03.122144 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:39:03.122159 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:39:03.122169 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:39:03.122178 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:39:03.122188 | orchestrator |
2026-04-07 00:39:03.122197 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-07 00:39:03.122207 | orchestrator | Tuesday 07 April 2026 00:39:01 +0000 (0:00:01.187) 0:00:16.352 *********
2026-04-07 00:39:03.122216 | orchestrator | changed: [testbed-manager]
2026-04-07 00:39:03.122226 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:39:03.122236 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:39:03.122245 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:39:03.122256 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:39:03.122266 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:39:03.122275 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:39:03.122284 | orchestrator |
2026-04-07 00:39:03.122294 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:39:03.122320 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:39:03.122334 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:39:03.122344 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:39:03.122353 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:39:03.122373 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:39:03.122382 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:39:03.122391 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:39:03.122402 | orchestrator |
2026-04-07 00:39:03.122413 | orchestrator |
2026-04-07 00:39:03.122423 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:39:03.122431 | orchestrator | Tuesday 07 April 2026 00:39:02 +0000 (0:00:01.536) 0:00:17.888 *********
2026-04-07 00:39:03.122441 | orchestrator | ===============================================================================
2026-04-07 00:39:03.122451 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.28s
2026-04-07 00:39:03.122462 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.54s
2026-04-07 00:39:03.122471 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.52s
2026-04-07 00:39:03.122481 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.19s
2026-04-07 00:39:03.122490 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.06s
2026-04-07 00:39:03.240053 | orchestrator | + osism apply network
2026-04-07 00:39:14.427037 | orchestrator | 2026-04-07 00:39:14 | INFO  | Prepare task for execution of network.
2026-04-07 00:39:14.501890 | orchestrator | 2026-04-07 00:39:14 | INFO  | Task cbd1d17b-d0c3-4c5e-877a-f120ff312997 (network) was prepared for execution.
2026-04-07 00:39:14.501970 | orchestrator | 2026-04-07 00:39:14 | INFO  | It takes a moment until task cbd1d17b-d0c3-4c5e-877a-f120ff312997 (network) has been started and output is visible here.
2026-04-07 00:39:42.180791 | orchestrator |
2026-04-07 00:39:42.180875 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-07 00:39:42.180885 | orchestrator |
2026-04-07 00:39:42.180893 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-07 00:39:42.180900 | orchestrator | Tuesday 07 April 2026 00:39:18 +0000 (0:00:00.356) 0:00:00.356 *********
2026-04-07 00:39:42.180907 | orchestrator | ok: [testbed-manager]
2026-04-07 00:39:42.180914 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:39:42.180920 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:39:42.180926 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:39:42.180932 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:39:42.180938 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:39:42.180944 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:39:42.180950 | orchestrator |
2026-04-07 00:39:42.180956 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-07 00:39:42.180962 | orchestrator | Tuesday 07 April 2026 00:39:18 +0000 (0:00:00.555) 0:00:00.911 *********
2026-04-07 00:39:42.180970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:39:42.180978 | orchestrator |
2026-04-07 00:39:42.180984 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-07 00:39:42.180990 | orchestrator | Tuesday 07 April 2026 00:39:19 +0000 (0:00:01.018) 0:00:01.930 *********
2026-04-07 00:39:42.180996 | orchestrator | ok: [testbed-manager]
2026-04-07 00:39:42.181002 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:39:42.181008 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:39:42.181013 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:39:42.181019 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:39:42.181025 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:39:42.181049 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:39:42.181055 | orchestrator |
2026-04-07 00:39:42.181061 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-07 00:39:42.181068 | orchestrator | Tuesday 07 April 2026 00:39:22 +0000 (0:00:02.415) 0:00:04.345 *********
2026-04-07 00:39:42.181077 | orchestrator | ok: [testbed-manager]
2026-04-07 00:39:42.181086 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:39:42.181092 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:39:42.181098 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:39:42.181104 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:39:42.181109 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:39:42.181115 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:39:42.181121 | orchestrator |
2026-04-07 00:39:42.181126 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-07 00:39:42.181132 | orchestrator | Tuesday 07 April 2026 00:39:23 +0000 (0:00:01.540) 0:00:05.886 *********
2026-04-07 00:39:42.181138 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-07 00:39:42.181144 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-07 00:39:42.181149 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-07 00:39:42.181155 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-07 00:39:42.181161 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-07 00:39:42.181167 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-07 00:39:42.181173 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-07 00:39:42.181179 | orchestrator |
2026-04-07 00:39:42.181185 | orchestrator | TASK [osism.commons.network : Write network_netplan_config_template to temporary file] ***
2026-04-07 00:39:42.181191 | orchestrator | Tuesday 07 April 2026 00:39:24 +0000 (0:00:01.044) 0:00:06.931 *********
2026-04-07 00:39:42.181197 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:39:42.181203 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:39:42.181209 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:39:42.181215 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:39:42.181220 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:39:42.181226 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:39:42.181232 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:39:42.181238 | orchestrator |
2026-04-07 00:39:42.181244 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] ***
2026-04-07 00:39:42.181251 | orchestrator | Tuesday 07 April 2026 00:39:25 +0000 (0:00:00.567) 0:00:07.498 *********
2026-04-07 00:39:42.181256 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:39:42.181262 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:39:42.181268 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:39:42.181273 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:39:42.181279 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:39:42.181285 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:39:42.181290 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:39:42.181296 | orchestrator |
2026-04-07 00:39:42.181302 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] ***
2026-04-07 00:39:42.181308 | orchestrator | Tuesday 07 April 2026 00:39:26 +0000 (0:00:00.738) 0:00:08.200 *********
2026-04-07 00:39:42.181313 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:39:42.181319 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:39:42.181325 | orchestrator | skipping: [testbed-node-1]
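[Editor's note: the tasks above and below render a netplan configuration per host, copy it to `/etc/netplan`, and later remove the cloud-init default while keeping `/etc/netplan/01-osism.yaml` (both file names appear in the cleanup task further down). The rendered content itself is not in this log; the following is a minimal hedged sketch only. The interface name `ens3` and the exact address are assumptions; the address range matches the 192.168.16.0/20 management addresses seen later in the play.]

```yaml
# /etc/netplan/01-osism.yaml -- illustrative sketch, not the file rendered
# by osism.commons.network. Interface name and address are assumptions.
network:
  version: 2
  ethernets:
    ens3:
      addresses:
        - 192.168.16.10/20
```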
2026-04-07 00:39:42.181330 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:39:42.181336 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:39:42.181355 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:39:42.181363 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:39:42.181369 | orchestrator |
2026-04-07 00:39:42.181376 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-07 00:39:42.181382 | orchestrator | Tuesday 07 April 2026 00:39:26 +0000 (0:00:00.738) 0:00:08.939 *********
2026-04-07 00:39:42.181389 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 00:39:42.181399 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 00:39:42.181405 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-07 00:39:42.181412 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-07 00:39:42.181418 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 00:39:42.181424 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-07 00:39:42.181431 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-07 00:39:42.181438 | orchestrator |
2026-04-07 00:39:42.181456 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-07 00:39:42.181463 | orchestrator | Tuesday 07 April 2026 00:39:29 +0000 (0:00:02.981) 0:00:11.920 *********
2026-04-07 00:39:42.181470 | orchestrator | changed: [testbed-manager]
2026-04-07 00:39:42.181476 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:39:42.181483 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:39:42.181490 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:39:42.181496 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:39:42.181503 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:39:42.181509 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:39:42.181516 | orchestrator |
2026-04-07 00:39:42.181522 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-07 00:39:42.181529 | orchestrator | Tuesday 07 April 2026 00:39:31 +0000 (0:00:01.508) 0:00:13.429 *********
2026-04-07 00:39:42.181536 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 00:39:42.181543 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-07 00:39:42.181549 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-07 00:39:42.181556 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 00:39:42.181562 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 00:39:42.181569 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-07 00:39:42.181576 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-07 00:39:42.181600 | orchestrator |
2026-04-07 00:39:42.181607 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-07 00:39:42.181614 | orchestrator | Tuesday 07 April 2026 00:39:32 +0000 (0:00:01.620) 0:00:15.050 *********
2026-04-07 00:39:42.181620 | orchestrator | ok: [testbed-manager]
2026-04-07 00:39:42.181627 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:39:42.181634 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:39:42.181640 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:39:42.181647 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:39:42.181654 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:39:42.181661 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:39:42.181667 | orchestrator |
2026-04-07 00:39:42.181674 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-07 00:39:42.181681 | orchestrator | Tuesday 07 April 2026 00:39:33 +0000 (0:00:00.994) 0:00:16.044 *********
2026-04-07 00:39:42.181688 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:39:42.181695 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:39:42.181701 | orchestrator | skipping: [testbed-node-1]
2026-04-07
00:39:42.181708 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:39:42.181714 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:39:42.181721 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:39:42.181727 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:39:42.181733 | orchestrator |
2026-04-07 00:39:42.181738 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-04-07 00:39:42.181744 | orchestrator | Tuesday 07 April 2026 00:39:34 +0000 (0:00:00.622) 0:00:16.666 *********
2026-04-07 00:39:42.181750 | orchestrator | ok: [testbed-manager]
2026-04-07 00:39:42.181756 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:39:42.181761 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:39:42.181767 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:39:42.181773 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:39:42.181779 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:39:42.181788 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:39:42.181794 | orchestrator |
2026-04-07 00:39:42.181804 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-04-07 00:39:42.181810 | orchestrator | Tuesday 07 April 2026 00:39:36 +0000 (0:00:02.216) 0:00:18.883 *********
2026-04-07 00:39:42.181816 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:39:42.181822 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:39:42.181827 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:39:42.181833 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:39:42.181839 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:39:42.181844 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:39:42.181850 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-04-07 00:39:42.181857 | orchestrator |
2026-04-07 00:39:42.181863 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-04-07 00:39:42.181869 | orchestrator | Tuesday 07 April 2026 00:39:37 +0000 (0:00:00.868) 0:00:19.751 *********
2026-04-07 00:39:42.181874 | orchestrator | ok: [testbed-manager]
2026-04-07 00:39:42.181880 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:39:42.181886 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:39:42.181891 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:39:42.181897 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:39:42.181903 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:39:42.181908 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:39:42.181914 | orchestrator |
2026-04-07 00:39:42.181920 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-04-07 00:39:42.181926 | orchestrator | Tuesday 07 April 2026 00:39:39 +0000 (0:00:01.606) 0:00:21.357 *********
2026-04-07 00:39:42.181932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:39:42.181939 | orchestrator |
2026-04-07 00:39:42.181945 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-07 00:39:42.181951 | orchestrator | Tuesday 07 April 2026 00:39:40 +0000 (0:00:01.197) 0:00:22.555 *********
2026-04-07 00:39:42.181957 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:39:42.181962 | orchestrator | ok: [testbed-manager]
2026-04-07 00:39:42.181968 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:39:42.181974 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:39:42.181980 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:39:42.181985 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:39:42.181991 | orchestrator | ok: [testbed-node-5]
2026-04-07
00:39:42.181997 | orchestrator |
2026-04-07 00:39:42.182003 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-04-07 00:39:42.182009 | orchestrator | Tuesday 07 April 2026 00:39:41 +0000 (0:00:01.186) 0:00:23.742 *********
2026-04-07 00:39:42.182053 | orchestrator | ok: [testbed-manager]
2026-04-07 00:39:42.182060 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:39:42.182066 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:39:42.182072 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:39:42.182077 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:39:42.182138 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:39:57.476399 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:39:57.476500 | orchestrator |
2026-04-07 00:39:57.476516 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-07 00:39:57.476529 | orchestrator | Tuesday 07 April 2026 00:39:42 +0000 (0:00:00.682) 0:00:24.424 *********
2026-04-07 00:39:57.476539 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-04-07 00:39:57.476550 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-04-07 00:39:57.476560 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-04-07 00:39:57.476622 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-04-07 00:39:57.476636 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-07 00:39:57.476673 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-04-07 00:39:57.476684 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-07 00:39:57.476693 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-07 00:39:57.476703 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-07 00:39:57.476713 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-07 00:39:57.476723 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-04-07 00:39:57.476732 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-04-07 00:39:57.476742 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-07 00:39:57.476752 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-07 00:39:57.476761 | orchestrator |
2026-04-07 00:39:57.476771 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-04-07 00:39:57.476781 | orchestrator | Tuesday 07 April 2026 00:39:43 +0000 (0:00:01.277) 0:00:25.702 *********
2026-04-07 00:39:57.476791 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:39:57.476801 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:39:57.476810 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:39:57.476820 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:39:57.476829 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:39:57.476839 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:39:57.476849 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:39:57.476858 | orchestrator |
2026-04-07 00:39:57.476868 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-04-07 00:39:57.476878 | orchestrator | Tuesday 07 April 2026 00:39:44 +0000 (0:00:00.634) 0:00:26.336 *********
2026-04-07 00:39:57.476904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-4, testbed-node-2, testbed-node-3, testbed-node-5
2026-04-07 00:39:57.476917 | orchestrator |
2026-04-07
00:39:57.476927 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-04-07 00:39:57.476937 | orchestrator | Tuesday 07 April 2026 00:39:48 +0000 (0:00:04.270) 0:00:30.607 *********
2026-04-07 00:39:57.476948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.476959 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-07 00:39:57.476970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.476980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-07 00:39:57.477026 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-07 00:39:57.477037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.477073 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.477084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.477094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.477104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-07 00:39:57.477115 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-07 00:39:57.477133 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-07 00:39:57.477149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-07 00:39:57.477164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-07 00:39:57.477180 | orchestrator |
2026-04-07 00:39:57.477205 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-04-07 00:39:57.477221 | orchestrator | Tuesday 07 April 2026 00:39:53 +0000 (0:00:05.035) 0:00:35.642 *********
2026-04-07 00:39:57.477238 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-07 00:39:57.477254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.477264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.477274 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-07 00:39:57.477284 | orchestrator |
changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.477302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.477312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:39:57.477329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-07 00:40:07.553923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-07 00:40:07.554103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-07 00:40:07.554124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-07 00:40:07.554136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-07 00:40:07.554147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-07 00:40:07.554159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-07 00:40:07.554171 | orchestrator |
2026-04-07 00:40:07.554184 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-04-07 00:40:07.554196 | orchestrator | Tuesday 07 April 2026 00:39:58 +0000 (0:00:04.959) 0:00:40.601 *********
2026-04-07 00:40:07.554225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:40:07.554237 | orchestrator |
2026-04-07 00:40:07.554248 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-07 00:40:07.554259 | orchestrator | Tuesday 07 April 2026 00:39:59 +0000 (0:00:01.089) 0:00:41.691 *********
2026-04-07 00:40:07.554270 | orchestrator | ok: [testbed-manager]
2026-04-07 00:40:07.554282 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:40:07.554293 |
orchestrator | ok: [testbed-node-1]
2026-04-07 00:40:07.554305 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:40:07.554316 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:40:07.554327 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:40:07.554337 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:40:07.554374 | orchestrator |
2026-04-07 00:40:07.554386 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-07 00:40:07.554397 | orchestrator | Tuesday 07 April 2026 00:40:00 +0000 (0:00:00.873) 0:00:42.565 *********
2026-04-07 00:40:07.554408 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 00:40:07.554419 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 00:40:07.554430 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 00:40:07.554441 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 00:40:07.554453 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 00:40:07.554466 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 00:40:07.554480 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 00:40:07.554493 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 00:40:07.554506 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:40:07.554520 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 00:40:07.554533 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 00:40:07.554546 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 00:40:07.554559 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 00:40:07.554599 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:40:07.554612 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 00:40:07.554626 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 00:40:07.554638 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 00:40:07.554670 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 00:40:07.554684 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:40:07.554697 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 00:40:07.554710 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 00:40:07.554722 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 00:40:07.554735 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 00:40:07.554748 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:40:07.554762 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 00:40:07.554774 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 00:40:07.554788 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 00:40:07.554800 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:40:07.554813 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 00:40:07.554824 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:40:07.554835 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 00:40:07.554845 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 00:40:07.554856 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 00:40:07.554867 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 00:40:07.554877 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:40:07.554888 | orchestrator |
2026-04-07 00:40:07.554899 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-04-07 00:40:07.554919 | orchestrator | Tuesday 07 April 2026 00:40:01 +0000 (0:00:00.831) 0:00:43.396 *********
2026-04-07 00:40:07.554930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:40:07.554942 | orchestrator |
2026-04-07 00:40:07.554953 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-04-07 00:40:07.554963 | orchestrator | Tuesday 07 April 2026 00:40:02 +0000 (0:00:01.112) 0:00:44.509 *********
2026-04-07 00:40:07.554974 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:40:07.554991 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:40:07.555002 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:40:07.555013 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:40:07.555024 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:40:07.555035 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:40:07.555045 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:40:07.555056 | orchestrator |
2026-04-07 00:40:07.555067 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
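[Editor's note: the "Create systemd networkd netdev files" and "network files" tasks above wrote one `.netdev`/`.network` pair per vxlan interface; the cleanup task lists them as `/etc/systemd/network/30-vxlan0.netdev` and so on. Reconstructed below is a hedged sketch of what the pair for `vxlan0` on testbed-manager might contain, using only the values visible in the task items (VNI 42, MTU 1350, local IP 192.168.16.5, address 192.168.112.5/20). The exact option layout is decided by the role template and is an assumption here; the per-peer forwarding entries for the `dests` list are omitted.]

```ini
# Sketch of /etc/systemd/network/30-vxlan0.netdev (testbed-manager)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# Sketch of /etc/systemd/network/30-vxlan0.network (testbed-manager)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```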
2026-04-07 00:40:07.555077 | orchestrator | Tuesday 07 April 2026 00:40:02 +0000 (0:00:00.567) 0:00:45.076 *********
2026-04-07 00:40:07.555088 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:40:07.555099 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:40:07.555109 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:40:07.555120 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:40:07.555131 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:40:07.555141 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:40:07.555152 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:40:07.555163 | orchestrator |
2026-04-07 00:40:07.555174 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-04-07 00:40:07.555184 | orchestrator | Tuesday 07 April 2026 00:40:03 +0000 (0:00:00.658) 0:00:45.735 *********
2026-04-07 00:40:07.555195 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:40:07.555205 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:40:07.555216 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:40:07.555226 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:40:07.555237 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:40:07.555247 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:40:07.555258 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:40:07.555269 | orchestrator |
2026-04-07 00:40:07.555279 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-04-07 00:40:07.555290 | orchestrator | Tuesday 07 April 2026 00:40:04 +0000 (0:00:00.535) 0:00:46.270 *********
2026-04-07 00:40:07.555301 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:40:07.555311 | orchestrator | ok: [testbed-manager]
2026-04-07 00:40:07.555322 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:40:07.555333 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:40:07.555344 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:40:07.555354 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:40:07.555365 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:40:07.555376 | orchestrator |
2026-04-07 00:40:07.555387 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-07 00:40:07.555397 | orchestrator | Tuesday 07 April 2026 00:40:05 +0000 (0:00:01.590) 0:00:47.860 *********
2026-04-07 00:40:07.555408 | orchestrator | ok: [testbed-manager]
2026-04-07 00:40:07.555419 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:40:07.555429 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:40:07.555440 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:40:07.555450 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:40:07.555461 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:40:07.555472 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:40:07.555482 | orchestrator |
2026-04-07 00:40:07.555493 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-07 00:40:07.555504 | orchestrator | Tuesday 07 April 2026 00:40:06 +0000 (0:00:00.996) 0:00:48.856 *********
2026-04-07 00:40:07.555521 | orchestrator | ok: [testbed-manager]
2026-04-07 00:40:07.555532 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:40:07.555543 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:40:07.555553 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:40:07.555583 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:40:07.555595 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:40:07.555613 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:40:09.948624 | orchestrator |
2026-04-07 00:40:09.948720 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-07 00:40:09.948735 | orchestrator | Tuesday 07 April 2026 00:40:08 +0000 (0:00:01.865) 0:00:50.721 *********
2026-04-07 00:40:09.948746 | orchestrator | skipping: [testbed-manager]
2026-04-07
00:40:09.948757 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:40:09.948767 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:40:09.948777 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:40:09.948786 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:40:09.948796 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:40:09.948805 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:40:09.948814 | orchestrator | 2026-04-07 00:40:09.948824 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-07 00:40:09.948834 | orchestrator | Tuesday 07 April 2026 00:40:09 +0000 (0:00:00.637) 0:00:51.359 ********* 2026-04-07 00:40:09.948844 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:40:09.948853 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:40:09.948863 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:40:09.948872 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:40:09.948881 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:40:09.948891 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:40:09.948900 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:40:09.948909 | orchestrator | 2026-04-07 00:40:09.948919 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:40:09.948930 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-07 00:40:09.948941 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 00:40:09.948951 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 00:40:09.948960 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 00:40:09.948970 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 
failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 00:40:09.948979 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 00:40:09.948989 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 00:40:09.949003 | orchestrator | 2026-04-07 00:40:09.949013 | orchestrator | 2026-04-07 00:40:09.949023 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:40:09.949033 | orchestrator | Tuesday 07 April 2026 00:40:09 +0000 (0:00:00.472) 0:00:51.831 ********* 2026-04-07 00:40:09.949043 | orchestrator | =============================================================================== 2026-04-07 00:40:09.949052 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.04s 2026-04-07 00:40:09.949062 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.96s 2026-04-07 00:40:09.949071 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.27s 2026-04-07 00:40:09.949109 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.98s 2026-04-07 00:40:09.949119 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.42s 2026-04-07 00:40:09.949130 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.22s 2026-04-07 00:40:09.949142 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 1.87s 2026-04-07 00:40:09.949153 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.62s 2026-04-07 00:40:09.949165 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.61s 2026-04-07 00:40:09.949177 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 
1.59s 2026-04-07 00:40:09.949188 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.54s 2026-04-07 00:40:09.949200 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.51s 2026-04-07 00:40:09.949212 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.28s 2026-04-07 00:40:09.949223 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.20s 2026-04-07 00:40:09.949234 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s 2026-04-07 00:40:09.949246 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.11s 2026-04-07 00:40:09.949257 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s 2026-04-07 00:40:09.949268 | orchestrator | osism.commons.network : Create required directories --------------------- 1.04s 2026-04-07 00:40:09.949280 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.02s 2026-04-07 00:40:09.949291 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.00s 2026-04-07 00:40:10.117871 | orchestrator | + osism apply wireguard 2026-04-07 00:40:21.454423 | orchestrator | 2026-04-07 00:40:21 | INFO  | Prepare task for execution of wireguard. 2026-04-07 00:40:21.523721 | orchestrator | 2026-04-07 00:40:21 | INFO  | Task e65eee01-7559-49e0-bdbf-e38f240f2353 (wireguard) was prepared for execution. 2026-04-07 00:40:21.523851 | orchestrator | 2026-04-07 00:40:21 | INFO  | It takes a moment until task e65eee01-7559-49e0-bdbf-e38f240f2353 (wireguard) has been started and output is visible here. 
2026-04-07 00:40:38.304228 | orchestrator |
2026-04-07 00:40:38.304340 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-07 00:40:38.304357 | orchestrator |
2026-04-07 00:40:38.304370 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-07 00:40:38.304382 | orchestrator | Tuesday 07 April 2026 00:40:24 +0000 (0:00:00.263) 0:00:00.263 *********
2026-04-07 00:40:38.304394 | orchestrator | ok: [testbed-manager]
2026-04-07 00:40:38.304406 | orchestrator |
2026-04-07 00:40:38.304418 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-07 00:40:38.304429 | orchestrator | Tuesday 07 April 2026 00:40:25 +0000 (0:00:01.526) 0:00:01.790 *********
2026-04-07 00:40:38.304440 | orchestrator | changed: [testbed-manager]
2026-04-07 00:40:38.304452 | orchestrator |
2026-04-07 00:40:38.304463 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-07 00:40:38.304474 | orchestrator | Tuesday 07 April 2026 00:40:31 +0000 (0:00:05.352) 0:00:07.142 *********
2026-04-07 00:40:38.304485 | orchestrator | changed: [testbed-manager]
2026-04-07 00:40:38.304496 | orchestrator |
2026-04-07 00:40:38.304507 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-07 00:40:38.304518 | orchestrator | Tuesday 07 April 2026 00:40:31 +0000 (0:00:00.358) 0:00:07.650 *********
2026-04-07 00:40:38.304529 | orchestrator | changed: [testbed-manager]
2026-04-07 00:40:38.304540 | orchestrator |
2026-04-07 00:40:38.304595 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-07 00:40:38.304615 | orchestrator | Tuesday 07 April 2026 00:40:32 +0000 (0:00:00.358) 0:00:08.009 *********
2026-04-07 00:40:38.304667 | orchestrator | ok: [testbed-manager]
2026-04-07 00:40:38.304680 | orchestrator |
2026-04-07 00:40:38.304710 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-07 00:40:38.304722 | orchestrator | Tuesday 07 April 2026 00:40:32 +0000 (0:00:00.479) 0:00:08.489 *********
2026-04-07 00:40:38.304733 | orchestrator | ok: [testbed-manager]
2026-04-07 00:40:38.304745 | orchestrator |
2026-04-07 00:40:38.304757 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-07 00:40:38.304769 | orchestrator | Tuesday 07 April 2026 00:40:33 +0000 (0:00:00.373) 0:00:08.862 *********
2026-04-07 00:40:38.304782 | orchestrator | ok: [testbed-manager]
2026-04-07 00:40:38.304794 | orchestrator |
2026-04-07 00:40:38.304807 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-07 00:40:38.304825 | orchestrator | Tuesday 07 April 2026 00:40:33 +0000 (0:00:00.378) 0:00:09.240 *********
2026-04-07 00:40:38.304838 | orchestrator | changed: [testbed-manager]
2026-04-07 00:40:38.304850 | orchestrator |
2026-04-07 00:40:38.304862 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-07 00:40:38.304875 | orchestrator | Tuesday 07 April 2026 00:40:34 +0000 (0:00:01.154) 0:00:10.395 *********
2026-04-07 00:40:38.304888 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-07 00:40:38.304900 | orchestrator | changed: [testbed-manager]
2026-04-07 00:40:38.304913 | orchestrator |
2026-04-07 00:40:38.304925 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-07 00:40:38.304936 | orchestrator | Tuesday 07 April 2026 00:40:35 +0000 (0:00:00.939) 0:00:11.334 *********
2026-04-07 00:40:38.304947 | orchestrator | changed: [testbed-manager]
2026-04-07 00:40:38.304957 | orchestrator |
2026-04-07 00:40:38.304968 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-07 00:40:38.304979 | orchestrator | Tuesday 07 April 2026 00:40:37 +0000 (0:00:01.794) 0:00:13.129 *********
2026-04-07 00:40:38.304990 | orchestrator | changed: [testbed-manager]
2026-04-07 00:40:38.305001 | orchestrator |
2026-04-07 00:40:38.305011 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:40:38.305023 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:40:38.305035 | orchestrator |
2026-04-07 00:40:38.305046 | orchestrator |
2026-04-07 00:40:38.305057 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:40:38.305067 | orchestrator | Tuesday 07 April 2026 00:40:38 +0000 (0:00:00.841) 0:00:13.970 *********
2026-04-07 00:40:38.305078 | orchestrator | ===============================================================================
2026-04-07 00:40:38.305089 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.35s
2026-04-07 00:40:38.305100 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.79s
2026-04-07 00:40:38.305111 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.53s
2026-04-07 00:40:38.305122 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s
2026-04-07 00:40:38.305132 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s
2026-04-07 00:40:38.305143 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.84s
2026-04-07 00:40:38.305154 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.51s
2026-04-07 00:40:38.305165 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.48s
2026-04-07 00:40:38.305175 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.38s
2026-04-07 00:40:38.305187 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.37s
2026-04-07 00:40:38.305198 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.36s
2026-04-07 00:40:38.419926 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-07 00:40:38.451437 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-07 00:40:38.451591 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-07 00:40:38.530135 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 177 0 --:--:-- --:--:-- --:--:-- 179
2026-04-07 00:40:38.542226 | orchestrator | + osism apply --environment custom workarounds
2026-04-07 00:40:39.650566 | orchestrator | 2026-04-07 00:40:39 | INFO  | Trying to run play workarounds in environment custom
2026-04-07 00:40:49.682732 | orchestrator | 2026-04-07 00:40:49 | INFO  | Prepare task for execution of workarounds.
2026-04-07 00:40:49.761769 | orchestrator | 2026-04-07 00:40:49 | INFO  | Task 59912208-9170-4f49-b04c-78cf0814a54c (workarounds) was prepared for execution.
2026-04-07 00:40:49.761891 | orchestrator | 2026-04-07 00:40:49 | INFO  | It takes a moment until task 59912208-9170-4f49-b04c-78cf0814a54c (workarounds) has been started and output is visible here.
2026-04-07 00:41:12.767924 | orchestrator |
2026-04-07 00:41:12.768003 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 00:41:12.768013 | orchestrator |
2026-04-07 00:41:12.768019 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-07 00:41:12.768026 | orchestrator | Tuesday 07 April 2026 00:40:52 +0000 (0:00:00.162) 0:00:00.162 *********
2026-04-07 00:41:12.768033 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-07 00:41:12.768039 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-07 00:41:12.768046 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-07 00:41:12.768051 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-07 00:41:12.768057 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-07 00:41:12.768063 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-07 00:41:12.768069 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-07 00:41:12.768075 | orchestrator |
2026-04-07 00:41:12.768086 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-07 00:41:12.768095 | orchestrator |
2026-04-07 00:41:12.768105 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-07 00:41:12.768116 | orchestrator | Tuesday 07 April 2026 00:40:53 +0000 (0:00:00.624) 0:00:00.786 *********
2026-04-07 00:41:12.768131 | orchestrator | ok: [testbed-manager]
2026-04-07 00:41:12.768138 | orchestrator |
2026-04-07 00:41:12.768144 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-07 00:41:12.768150 | orchestrator |
2026-04-07 00:41:12.768156 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-07 00:41:12.768161 | orchestrator | Tuesday 07 April 2026 00:40:55 +0000 (0:00:02.442) 0:00:03.228 *********
2026-04-07 00:41:12.768167 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:41:12.768173 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:41:12.768179 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:41:12.768185 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:41:12.768190 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:41:12.768196 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:41:12.768202 | orchestrator |
2026-04-07 00:41:12.768207 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-07 00:41:12.768213 | orchestrator |
2026-04-07 00:41:12.768219 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-07 00:41:12.768225 | orchestrator | Tuesday 07 April 2026 00:40:58 +0000 (0:00:02.201) 0:00:05.430 *********
2026-04-07 00:41:12.768231 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 00:41:12.768238 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 00:41:12.768262 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 00:41:12.768273 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 00:41:12.768281 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 00:41:12.768287 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 00:41:12.768293 | orchestrator |
2026-04-07 00:41:12.768299 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-07 00:41:12.768304 | orchestrator | Tuesday 07 April 2026 00:40:59 +0000 (0:00:01.287) 0:00:06.717 *********
2026-04-07 00:41:12.768310 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:41:12.768316 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:41:12.768322 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:41:12.768328 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:41:12.768333 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:41:12.768339 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:41:12.768345 | orchestrator |
2026-04-07 00:41:12.768350 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-07 00:41:12.768359 | orchestrator | Tuesday 07 April 2026 00:41:02 +0000 (0:00:03.485) 0:00:10.203 *********
2026-04-07 00:41:12.768369 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:41:12.768382 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:41:12.768393 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:41:12.768402 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:41:12.768411 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:41:12.768419 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:41:12.768428 | orchestrator |
2026-04-07 00:41:12.768436 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-07 00:41:12.768445 | orchestrator |
2026-04-07 00:41:12.768453 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-07 00:41:12.768462 | orchestrator | Tuesday 07 April 2026 00:41:03 +0000 (0:00:00.477) 0:00:10.680 *********
2026-04-07 00:41:12.768470 | orchestrator | changed: [testbed-manager]
2026-04-07 00:41:12.768477 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:41:12.768485 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:41:12.768494 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:41:12.768502 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:41:12.768511 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:41:12.768542 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:41:12.768552 | orchestrator |
2026-04-07 00:41:12.768562 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-07 00:41:12.768571 | orchestrator | Tuesday 07 April 2026 00:41:04 +0000 (0:00:01.602) 0:00:12.283 *********
2026-04-07 00:41:12.768580 | orchestrator | changed: [testbed-manager]
2026-04-07 00:41:12.768589 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:41:12.768598 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:41:12.768607 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:41:12.768616 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:41:12.768625 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:41:12.768687 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:41:12.768696 | orchestrator |
2026-04-07 00:41:12.768703 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-07 00:41:12.768710 | orchestrator | Tuesday 07 April 2026 00:41:06 +0000 (0:00:01.367) 0:00:13.651 *********
2026-04-07 00:41:12.768717 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:41:12.768724 | orchestrator | ok: [testbed-manager]
2026-04-07 00:41:12.768731 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:41:12.768737 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:41:12.768744 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:41:12.768751 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:41:12.768757 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:41:12.768772 | orchestrator |
2026-04-07 00:41:12.768779 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-07 00:41:12.768786 | orchestrator | Tuesday 07 April 2026 00:41:07 +0000 (0:00:01.576) 0:00:15.228 *********
2026-04-07 00:41:12.768794 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:41:12.768804 | orchestrator | changed: [testbed-manager]
2026-04-07 00:41:12.768816 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:41:12.768830 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:41:12.768839 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:41:12.768849 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:41:12.768858 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:41:12.768866 | orchestrator |
2026-04-07 00:41:12.768875 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-07 00:41:12.768885 | orchestrator | Tuesday 07 April 2026 00:41:09 +0000 (0:00:01.488) 0:00:16.716 *********
2026-04-07 00:41:12.768894 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:41:12.768909 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:41:12.768919 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:41:12.768927 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:41:12.768933 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:41:12.768938 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:41:12.768944 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:41:12.768951 | orchestrator |
2026-04-07 00:41:12.768960 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-07 00:41:12.768975 | orchestrator |
2026-04-07 00:41:12.768985 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-07 00:41:12.768995 | orchestrator | Tuesday 07 April 2026 00:41:10 +0000 (0:00:00.651) 0:00:17.368 *********
2026-04-07 00:41:12.769004 | orchestrator | ok: [testbed-manager]
2026-04-07 00:41:12.769012 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:41:12.769020 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:41:12.769028 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:41:12.769036 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:41:12.769046 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:41:12.769055 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:41:12.769063 | orchestrator |
2026-04-07 00:41:12.769073 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:41:12.769084 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 00:41:12.769094 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:41:12.769100 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:41:12.769106 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:41:12.769112 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:41:12.769117 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:41:12.769123 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:41:12.769129 | orchestrator |
2026-04-07 00:41:12.769135 | orchestrator |
2026-04-07 00:41:12.769140 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:41:12.769146 | orchestrator | Tuesday 07 April 2026 00:41:12 +0000 (0:00:02.695) 0:00:20.063 *********
2026-04-07 00:41:12.769152 | orchestrator | ===============================================================================
2026-04-07 00:41:12.769163 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.49s
2026-04-07 00:41:12.769169 | orchestrator | Install python3-docker -------------------------------------------------- 2.70s
2026-04-07 00:41:12.769175 | orchestrator | Apply netplan configuration --------------------------------------------- 2.44s
2026-04-07 00:41:12.769181 | orchestrator | Apply netplan configuration --------------------------------------------- 2.20s
2026-04-07 00:41:12.769186 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.60s
2026-04-07 00:41:12.769192 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.58s
2026-04-07 00:41:12.769198 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.49s
2026-04-07 00:41:12.769203 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.37s
2026-04-07 00:41:12.769209 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.29s
2026-04-07 00:41:12.769215 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s
2026-04-07 00:41:12.769221 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.62s
2026-04-07 00:41:12.769233 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.48s
2026-04-07 00:41:13.094992 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-07 00:41:24.334933 | orchestrator | 2026-04-07 00:41:24 | INFO  | Prepare task for execution of reboot.
2026-04-07 00:41:24.404568 | orchestrator | 2026-04-07 00:41:24 | INFO  | Task 0ef6cd05-a4ff-45b4-b9dc-f06c0d4d7460 (reboot) was prepared for execution.
2026-04-07 00:41:24.404670 | orchestrator | 2026-04-07 00:41:24 | INFO  | It takes a moment until task 0ef6cd05-a4ff-45b4-b9dc-f06c0d4d7460 (reboot) has been started and output is visible here.
2026-04-07 00:41:35.063319 | orchestrator |
2026-04-07 00:41:35.063472 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 00:41:35.063499 | orchestrator |
2026-04-07 00:41:35.063561 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 00:41:35.063574 | orchestrator | Tuesday 07 April 2026 00:41:27 +0000 (0:00:00.238) 0:00:00.238 *********
2026-04-07 00:41:35.063585 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:41:35.063598 | orchestrator |
2026-04-07 00:41:35.063609 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 00:41:35.063620 | orchestrator | Tuesday 07 April 2026 00:41:27 +0000 (0:00:00.129) 0:00:00.368 *********
2026-04-07 00:41:35.063631 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:41:35.063643 | orchestrator |
2026-04-07 00:41:35.063672 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 00:41:35.063684 | orchestrator | Tuesday 07 April 2026 00:41:28 +0000 (0:00:01.150) 0:00:01.518 *********
2026-04-07 00:41:35.063695 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:41:35.063706 | orchestrator |
2026-04-07 00:41:35.063717 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 00:41:35.063728 | orchestrator |
2026-04-07 00:41:35.063739 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 00:41:35.063750 | orchestrator | Tuesday 07 April 2026 00:41:28 +0000 (0:00:00.098) 0:00:01.617 *********
2026-04-07 00:41:35.063761 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:41:35.063772 | orchestrator |
2026-04-07 00:41:35.063788 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 00:41:35.063806 | orchestrator | Tuesday 07 April 2026 00:41:28 +0000 (0:00:00.097) 0:00:01.714 *********
2026-04-07 00:41:35.063825 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:41:35.063843 | orchestrator |
2026-04-07 00:41:35.063857 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 00:41:35.063872 | orchestrator | Tuesday 07 April 2026 00:41:29 +0000 (0:00:01.010) 0:00:02.724 *********
2026-04-07 00:41:35.063892 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:41:35.063941 | orchestrator |
2026-04-07 00:41:35.063957 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 00:41:35.063970 | orchestrator |
2026-04-07 00:41:35.063983 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 00:41:35.063997 | orchestrator | Tuesday 07 April 2026 00:41:29 +0000 (0:00:00.096) 0:00:02.820 *********
2026-04-07 00:41:35.064008 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:41:35.064019 | orchestrator |
2026-04-07 00:41:35.064030 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 00:41:35.064041 | orchestrator | Tuesday 07 April 2026 00:41:29 +0000 (0:00:00.088) 0:00:02.909 *********
2026-04-07 00:41:35.064051 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:41:35.064062 | orchestrator |
2026-04-07 00:41:35.064073 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 00:41:35.064084 | orchestrator | Tuesday 07 April 2026 00:41:31 +0000 (0:00:01.023) 0:00:03.932 *********
2026-04-07 00:41:35.064094 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:41:35.064105 | orchestrator |
2026-04-07 00:41:35.064115 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 00:41:35.064126 | orchestrator |
2026-04-07 00:41:35.064137 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 00:41:35.064148 | orchestrator | Tuesday 07 April 2026 00:41:31 +0000 (0:00:00.097) 0:00:04.030 *********
2026-04-07 00:41:35.064163 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:41:35.064182 | orchestrator |
2026-04-07 00:41:35.064209 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 00:41:35.064228 | orchestrator | Tuesday 07 April 2026 00:41:31 +0000 (0:00:00.085) 0:00:04.116 *********
2026-04-07 00:41:35.064246 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:41:35.064263 | orchestrator |
2026-04-07 00:41:35.064282 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 00:41:35.064301 | orchestrator | Tuesday 07 April 2026 00:41:32 +0000 (0:00:00.956) 0:00:05.072 *********
2026-04-07 00:41:35.064320 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:41:35.064338 | orchestrator |
2026-04-07 00:41:35.064355 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 00:41:35.064366 | orchestrator |
2026-04-07 00:41:35.064377 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 00:41:35.064388 | orchestrator | Tuesday 07 April 2026 00:41:32 +0000 (0:00:00.096) 0:00:05.169 *********
2026-04-07 00:41:35.064399 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:41:35.064409 | orchestrator |
2026-04-07 00:41:35.064420 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 00:41:35.064430 | orchestrator | Tuesday 07 April 2026 00:41:32 +0000 (0:00:00.164) 0:00:05.333 *********
2026-04-07 00:41:35.064441 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:41:35.064452 | orchestrator |
2026-04-07 00:41:35.064463 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 00:41:35.064473 | orchestrator | Tuesday 07 April 2026 00:41:33 +0000 (0:00:01.003) 0:00:06.337 *********
2026-04-07 00:41:35.064484 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:41:35.064495 | orchestrator |
2026-04-07 00:41:35.064576 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 00:41:35.064598 | orchestrator |
2026-04-07 00:41:35.064610 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 00:41:35.064621 | orchestrator | Tuesday 07 April 2026 00:41:33 +0000 (0:00:00.129) 0:00:06.466 *********
2026-04-07 00:41:35.064632 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:41:35.064643 | orchestrator |
2026-04-07 00:41:35.064654 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 00:41:35.064665 | orchestrator | Tuesday 07 April 2026 00:41:33 +0000 (0:00:00.103) 0:00:06.570 *********
2026-04-07 00:41:35.064676 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:41:35.064687 | orchestrator |
2026-04-07 00:41:35.064710 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 00:41:35.064722 | orchestrator | Tuesday 07 April 2026 00:41:34 +0000 (0:00:01.028) 0:00:07.598 *********
2026-04-07 00:41:35.064753 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:41:35.064765 | orchestrator |
2026-04-07 00:41:35.064776 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:41:35.064788 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:41:35.064801 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:41:35.064820 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-07 00:41:35.064832 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:41:35.064843 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:41:35.064854 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:41:35.064865 | orchestrator | 2026-04-07 00:41:35.064876 | orchestrator | 2026-04-07 00:41:35.064887 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:41:35.064898 | orchestrator | Tuesday 07 April 2026 00:41:34 +0000 (0:00:00.036) 0:00:07.635 ********* 2026-04-07 00:41:35.064909 | orchestrator | =============================================================================== 2026-04-07 00:41:35.064920 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.17s 2026-04-07 00:41:35.064931 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.67s 2026-04-07 00:41:35.064942 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.56s 2026-04-07 00:41:35.266221 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-07 00:41:46.595662 | orchestrator | 2026-04-07 00:41:46 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-07 00:41:46.668530 | orchestrator | 2026-04-07 00:41:46 | INFO  | Task 2fae3d6e-513f-40bd-8b7a-ad78e16dccc3 (wait-for-connection) was prepared for execution. 2026-04-07 00:41:46.668613 | orchestrator | 2026-04-07 00:41:46 | INFO  | It takes a moment until task 2fae3d6e-513f-40bd-8b7a-ad78e16dccc3 (wait-for-connection) has been started and output is visible here. 
2026-04-07 00:42:01.704064 | orchestrator | 2026-04-07 00:42:01.704165 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-07 00:42:01.704180 | orchestrator | 2026-04-07 00:42:01.704191 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-07 00:42:01.704201 | orchestrator | Tuesday 07 April 2026 00:41:49 +0000 (0:00:00.326) 0:00:00.326 ********* 2026-04-07 00:42:01.704211 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:42:01.704222 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:42:01.704232 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:42:01.704242 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:42:01.704252 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:42:01.704262 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:42:01.704272 | orchestrator | 2026-04-07 00:42:01.704282 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:42:01.704292 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:42:01.704315 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:42:01.704351 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:42:01.704362 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:42:01.704372 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:42:01.704381 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:42:01.704391 | orchestrator | 2026-04-07 00:42:01.704400 | orchestrator | 2026-04-07 00:42:01.704410 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-07 00:42:01.704420 | orchestrator | Tuesday 07 April 2026 00:42:01 +0000 (0:00:11.504) 0:00:11.831 ********* 2026-04-07 00:42:01.704429 | orchestrator | =============================================================================== 2026-04-07 00:42:01.704439 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.51s 2026-04-07 00:42:01.891000 | orchestrator | + osism apply hddtemp 2026-04-07 00:42:13.188447 | orchestrator | 2026-04-07 00:42:13 | INFO  | Prepare task for execution of hddtemp. 2026-04-07 00:42:13.272803 | orchestrator | 2026-04-07 00:42:13 | INFO  | Task d01869d0-8309-4821-ab81-dad70893bd9e (hddtemp) was prepared for execution. 2026-04-07 00:42:13.272928 | orchestrator | 2026-04-07 00:42:13 | INFO  | It takes a moment until task d01869d0-8309-4821-ab81-dad70893bd9e (hddtemp) has been started and output is visible here. 2026-04-07 00:42:40.029610 | orchestrator | 2026-04-07 00:42:40.029693 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-07 00:42:40.029703 | orchestrator | 2026-04-07 00:42:40.029709 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-07 00:42:40.029715 | orchestrator | Tuesday 07 April 2026 00:42:16 +0000 (0:00:00.335) 0:00:00.335 ********* 2026-04-07 00:42:40.029721 | orchestrator | ok: [testbed-manager] 2026-04-07 00:42:40.029728 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:42:40.029734 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:42:40.029739 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:42:40.029745 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:42:40.029762 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:42:40.029767 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:42:40.029773 | orchestrator | 2026-04-07 00:42:40.029778 | orchestrator | TASK [osism.services.hddtemp : Include 
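The sequence above follows a common pattern: reboot every node without blocking on the reboot task, then run a separate `wait-for-connection` play that polls until each host answers again. A minimal shell sketch of that "fire, then poll until the probe succeeds" idea is below; the function and variable names are illustrative and not taken from the OSISM playbooks or scripts.

```shell
#!/bin/sh
# wait_for_cmd MAX_ATTEMPTS CMD [ARGS...]
# Retry a probe command until it succeeds or MAX_ATTEMPTS is exhausted.
# In the job above the probe would be an SSH reachability check against
# each rebooted node; any command that exits 0 on success works here.
wait_for_cmd() {
    max_attempts=$1
    shift
    attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$max_attempts" ]; then
            echo "probe did not succeed after $max_attempts attempts" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep 1
    done
}
```

Splitting "trigger the reboot" from "wait for it to finish" keeps the reboot play fast and lets the wait step target all nodes in parallel, which is why the recap shows the wait task taking the full 11.5s while each reboot task returned in about a second.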
distribution specific install tasks] **** 2026-04-07 00:42:40.029784 | orchestrator | Tuesday 07 April 2026 00:42:17 +0000 (0:00:00.593) 0:00:00.928 ********* 2026-04-07 00:42:40.029792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:42:40.029800 | orchestrator | 2026-04-07 00:42:40.029805 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-07 00:42:40.029811 | orchestrator | Tuesday 07 April 2026 00:42:18 +0000 (0:00:01.071) 0:00:02.000 ********* 2026-04-07 00:42:40.029816 | orchestrator | ok: [testbed-manager] 2026-04-07 00:42:40.029822 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:42:40.029827 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:42:40.029833 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:42:40.029838 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:42:40.029844 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:42:40.029849 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:42:40.029854 | orchestrator | 2026-04-07 00:42:40.029860 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-07 00:42:40.029866 | orchestrator | Tuesday 07 April 2026 00:42:20 +0000 (0:00:02.400) 0:00:04.401 ********* 2026-04-07 00:42:40.029871 | orchestrator | changed: [testbed-manager] 2026-04-07 00:42:40.029893 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:42:40.029899 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:42:40.029905 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:42:40.029910 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:42:40.029916 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:42:40.029921 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:42:40.029926 | 
orchestrator | 2026-04-07 00:42:40.029932 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-07 00:42:40.029937 | orchestrator | Tuesday 07 April 2026 00:42:21 +0000 (0:00:00.865) 0:00:05.266 ********* 2026-04-07 00:42:40.029943 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:42:40.029948 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:42:40.029953 | orchestrator | ok: [testbed-manager] 2026-04-07 00:42:40.029959 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:42:40.029964 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:42:40.029969 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:42:40.029975 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:42:40.029980 | orchestrator | 2026-04-07 00:42:40.029985 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-07 00:42:40.029991 | orchestrator | Tuesday 07 April 2026 00:42:22 +0000 (0:00:01.332) 0:00:06.598 ********* 2026-04-07 00:42:40.029996 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:42:40.030002 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:42:40.030007 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:42:40.030052 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:42:40.030058 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:42:40.030064 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:42:40.030069 | orchestrator | changed: [testbed-manager] 2026-04-07 00:42:40.030075 | orchestrator | 2026-04-07 00:42:40.030080 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-07 00:42:40.030085 | orchestrator | Tuesday 07 April 2026 00:42:23 +0000 (0:00:00.596) 0:00:07.195 ********* 2026-04-07 00:42:40.030091 | orchestrator | changed: [testbed-manager] 2026-04-07 00:42:40.030096 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:42:40.030102 | orchestrator | changed: [testbed-node-3] 
2026-04-07 00:42:40.030107 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:42:40.030112 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:42:40.030118 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:42:40.030124 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:42:40.030129 | orchestrator | 2026-04-07 00:42:40.030135 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-07 00:42:40.030140 | orchestrator | Tuesday 07 April 2026 00:42:36 +0000 (0:00:13.298) 0:00:20.493 ********* 2026-04-07 00:42:40.030146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:42:40.030152 | orchestrator | 2026-04-07 00:42:40.030157 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-07 00:42:40.030163 | orchestrator | Tuesday 07 April 2026 00:42:37 +0000 (0:00:01.070) 0:00:21.564 ********* 2026-04-07 00:42:40.030169 | orchestrator | changed: [testbed-manager] 2026-04-07 00:42:40.030176 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:42:40.030182 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:42:40.030189 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:42:40.030195 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:42:40.030202 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:42:40.030208 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:42:40.030215 | orchestrator | 2026-04-07 00:42:40.030221 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:42:40.030228 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:42:40.030255 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:42:40.030262 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:42:40.030269 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:42:40.030279 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:42:40.030286 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:42:40.030292 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:42:40.030299 | orchestrator | 2026-04-07 00:42:40.030306 | orchestrator | 2026-04-07 00:42:40.030312 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:42:40.030319 | orchestrator | Tuesday 07 April 2026 00:42:39 +0000 (0:00:01.930) 0:00:23.494 ********* 2026-04-07 00:42:40.030326 | orchestrator | =============================================================================== 2026-04-07 00:42:40.030332 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.30s 2026-04-07 00:42:40.030338 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.40s 2026-04-07 00:42:40.030344 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s 2026-04-07 00:42:40.030351 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.33s 2026-04-07 00:42:40.030357 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.07s 2026-04-07 00:42:40.030363 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.07s 2026-04-07 00:42:40.030370 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.87s 2026-04-07 00:42:40.030376 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.60s 2026-04-07 00:42:40.030384 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.59s 2026-04-07 00:42:40.226259 | orchestrator | ++ semver latest 7.1.1 2026-04-07 00:42:40.284185 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-07 00:42:40.284263 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-07 00:42:40.284273 | orchestrator | + sudo systemctl restart manager.service 2026-04-07 00:42:53.650118 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-07 00:42:53.650235 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-07 00:42:53.650257 | orchestrator | + local max_attempts=60 2026-04-07 00:42:53.650275 | orchestrator | + local name=ceph-ansible 2026-04-07 00:42:53.650289 | orchestrator | + local attempt_num=1 2026-04-07 00:42:53.650304 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:42:53.691916 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:42:53.692011 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:42:53.692026 | orchestrator | + sleep 5 2026-04-07 00:42:58.696884 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:42:58.721746 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:42:58.721841 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:42:58.721856 | orchestrator | + sleep 5 2026-04-07 00:43:03.724151 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:03.756909 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:03.756977 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:03.756985 | orchestrator | + sleep 5 2026-04-07 00:43:08.762111 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:08.801786 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:08.801871 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:08.801911 | orchestrator | + sleep 5 2026-04-07 00:43:13.807583 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:13.849304 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:13.849396 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:13.849405 | orchestrator | + sleep 5 2026-04-07 00:43:18.854767 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:18.893231 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:18.893335 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:18.893351 | orchestrator | + sleep 5 2026-04-07 00:43:23.898475 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:23.939099 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:23.939196 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:23.939207 | orchestrator | + sleep 5 2026-04-07 00:43:28.943186 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:28.972550 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:28.972668 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:28.972692 | orchestrator | + sleep 5 2026-04-07 00:43:33.976020 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:34.014582 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:34.014696 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:34.014712 | orchestrator | + sleep 5 2026-04-07 00:43:39.018741 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:39.051092 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:39.051193 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:39.051209 | orchestrator | + sleep 5 2026-04-07 00:43:44.056018 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:44.096974 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:44.097076 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:44.097090 | orchestrator | + sleep 5 2026-04-07 00:43:49.102121 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:49.133981 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:49.134173 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:49.134203 | orchestrator | + sleep 5 2026-04-07 00:43:54.139258 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:54.180728 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:54.180825 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 00:43:54.180835 | orchestrator | + sleep 5 2026-04-07 00:43:59.185374 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 00:43:59.221854 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:59.221940 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-07 00:43:59.221951 | orchestrator | + local max_attempts=60 2026-04-07 00:43:59.221959 | orchestrator | + local name=kolla-ansible 2026-04-07 00:43:59.221967 | orchestrator | + local attempt_num=1 2026-04-07 00:43:59.222304 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-07 00:43:59.255215 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:59.255279 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-07 00:43:59.255285 | orchestrator | + local max_attempts=60 2026-04-07 00:43:59.255290 | orchestrator | + local name=osism-ansible 2026-04-07 00:43:59.255295 | orchestrator | + local attempt_num=1 2026-04-07 00:43:59.256072 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-07 00:43:59.291547 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 00:43:59.291632 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-07 00:43:59.291642 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-07 00:43:59.438386 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-07 00:43:59.589781 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-07 00:43:59.748170 | orchestrator | ARA in osism-ansible already disabled. 2026-04-07 00:43:59.899379 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-07 00:43:59.899589 | orchestrator | + osism apply gather-facts 2026-04-07 00:44:11.359033 | orchestrator | 2026-04-07 00:44:11 | INFO  | Prepare task for execution of gather-facts. 2026-04-07 00:44:11.443989 | orchestrator | 2026-04-07 00:44:11 | INFO  | Task 8f036fff-d487-4b56-b7d4-556547fa88a3 (gather-facts) was prepared for execution. 2026-04-07 00:44:11.444120 | orchestrator | 2026-04-07 00:44:11 | INFO  | It takes a moment until task 8f036fff-d487-4b56-b7d4-556547fa88a3 (gather-facts) has been started and output is visible here. 
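The xtrace output above shows `wait_for_container_healthy` polling `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds until the container reports `healthy`. The following is a reconstruction from that trace, not the testbed's actual source; the probe is factored into a separate `container_health` function so the loop can be exercised without Docker, whereas the real helper calls `docker inspect` directly.

```shell
#!/bin/sh
# Probe: print the Docker health status of a container by name.
container_health() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# wait_for_container_healthy MAX_ATTEMPTS NAME
# Poll the container's health status until it is "healthy" (covering the
# intermediate "starting" and "unhealthy" states seen in the log above),
# sleeping 5 seconds between attempts, up to MAX_ATTEMPTS tries.
wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    until [ "$(container_health "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name still not healthy, giving up" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

With 60 attempts at 5-second intervals this bounds the wait at roughly five minutes per container; in the run above `ceph-ansible` needed about 13 polls (unhealthy, then starting, then healthy) after the `manager.service` restart, while `kolla-ansible` and `osism-ansible` were already healthy on the first check.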
2026-04-07 00:44:21.638591 | orchestrator | 2026-04-07 00:44:21.638668 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-07 00:44:21.638676 | orchestrator | 2026-04-07 00:44:21.638696 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-07 00:44:21.638701 | orchestrator | Tuesday 07 April 2026 00:44:14 +0000 (0:00:00.288) 0:00:00.288 ********* 2026-04-07 00:44:21.638705 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:44:21.638710 | orchestrator | ok: [testbed-manager] 2026-04-07 00:44:21.638715 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:44:21.638719 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:44:21.638723 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:44:21.638727 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:44:21.638731 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:44:21.638735 | orchestrator | 2026-04-07 00:44:21.638740 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-07 00:44:21.638744 | orchestrator | 2026-04-07 00:44:21.638748 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-07 00:44:21.638752 | orchestrator | Tuesday 07 April 2026 00:44:20 +0000 (0:00:06.214) 0:00:06.503 ********* 2026-04-07 00:44:21.638756 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:44:21.638761 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:44:21.638765 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:44:21.638769 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:44:21.638773 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:44:21.638777 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:44:21.638781 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:44:21.638784 | orchestrator | 2026-04-07 00:44:21.638788 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-07 00:44:21.638792 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:44:21.638798 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:44:21.638802 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:44:21.638806 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:44:21.638809 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:44:21.638813 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:44:21.638817 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 00:44:21.638821 | orchestrator | 2026-04-07 00:44:21.638825 | orchestrator | 2026-04-07 00:44:21.638829 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:44:21.638833 | orchestrator | Tuesday 07 April 2026 00:44:21 +0000 (0:00:00.545) 0:00:07.048 ********* 2026-04-07 00:44:21.638836 | orchestrator | =============================================================================== 2026-04-07 00:44:21.638840 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.21s 2026-04-07 00:44:21.638844 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-04-07 00:44:21.775384 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-07 00:44:21.792056 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-07 
00:44:21.803193 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-07 00:44:21.820301 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-07 00:44:21.835479 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-07 00:44:21.848374 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-07 00:44:21.858741 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-07 00:44:21.868668 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-07 00:44:21.879798 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-07 00:44:21.889318 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-07 00:44:21.898320 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-07 00:44:21.907522 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-07 00:44:21.922865 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-07 00:44:21.932397 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-07 00:44:21.944685 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-07 00:44:21.956456 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-07 00:44:21.966571 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-07 00:44:21.977889 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-07 00:44:21.987480 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-07 00:44:21.996945 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-07 00:44:22.006096 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-07 00:44:22.025122 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-07 00:44:22.040567 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-07 00:44:22.058338 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-07 00:44:22.396175 | orchestrator | ok: Runtime: 0:23:57.539985 2026-04-07 00:44:22.496639 | 2026-04-07 00:44:22.496768 | TASK [Deploy services] 2026-04-07 00:44:23.028165 | orchestrator | skipping: Conditional result was False 2026-04-07 00:44:23.045338 | 2026-04-07 00:44:23.045501 | TASK [Deploy in a nutshell] 2026-04-07 00:44:23.755274 | orchestrator | + set -e 2026-04-07 00:44:23.755479 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 00:44:23.755504 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 00:44:23.755524 | orchestrator | ++ INTERACTIVE=false 2026-04-07 00:44:23.755536 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 00:44:23.755547 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 00:44:23.755559 | 
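The long run of `sudo ln -sf` commands above installs each numbered deploy/upgrade/bootstrap script under a short command name in `/usr/local/bin` (e.g. `deploy-infrastructure` for `deploy/200-infrastructure.sh`). The same mapping can be expressed as a loop over a small name-to-script table; this refactoring is illustrative only, with a few entries copied from the log, and is not how the testbed configuration actually writes it.

```shell
#!/bin/sh
# install_links BIN_DIR SCRIPTS_DIR
# Create one symlink per "command:relative-script-path" pair, so each
# deploy/upgrade step is callable by a short name. ln -sf overwrites
# any stale link from a previous run.
install_links() {
    bin_dir=$1
    scripts=$2
    while IFS=: read -r cmd script; do
        ln -sf "$scripts/$script" "$bin_dir/$cmd"
    done <<EOF
deploy-helper:deploy/001-helpers.sh
deploy-ceph-with-ansible:deploy/100-ceph-with-ansible.sh
deploy-infrastructure:deploy/200-infrastructure.sh
upgrade-openstack:upgrade/300-openstack.sh
EOF
}
```

Symlinking rather than copying means the wrappers always track the current state of `/opt/configuration`, so updating the repository updates every `deploy-*` and `upgrade-*` command at once.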
orchestrator | + source /opt/manager-vars.sh 2026-04-07 00:44:23.755598 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 00:44:23.755623 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 00:44:23.755635 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 00:44:23.755649 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 00:44:23.755660 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 00:44:23.755676 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 00:44:23.755686 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-07 00:44:23.755704 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-07 00:44:23.755727 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-07 00:44:23.755740 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 00:44:23.755750 | orchestrator | ++ export ARA=false 2026-04-07 00:44:23.755769 | orchestrator | ++ ARA=false 2026-04-07 00:44:23.755779 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 00:44:23.755789 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 00:44:23.755799 | orchestrator | ++ export TEMPEST=true 2026-04-07 00:44:23.755808 | orchestrator | ++ TEMPEST=true 2026-04-07 00:44:23.755818 | orchestrator | ++ export IS_ZUUL=true 2026-04-07 00:44:23.755827 | orchestrator | ++ IS_ZUUL=true 2026-04-07 00:44:23.755837 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.15 2026-04-07 00:44:23.755847 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.15 2026-04-07 00:44:23.755857 | orchestrator | ++ export EXTERNAL_API=false 2026-04-07 00:44:23.755866 | orchestrator | ++ EXTERNAL_API=false 2026-04-07 00:44:23.755875 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-07 00:44:23.755885 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-07 00:44:23.755895 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-07 00:44:23.755904 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-07 00:44:23.755914 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-07 00:44:23.755924 | 
orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-07 00:44:23.755934 | orchestrator | + echo 2026-04-07 00:44:23.755944 | orchestrator | 2026-04-07 00:44:23.755953 | orchestrator | # PULL IMAGES 2026-04-07 00:44:23.755963 | orchestrator | 2026-04-07 00:44:23.755995 | orchestrator | + echo '# PULL IMAGES' 2026-04-07 00:44:23.756006 | orchestrator | + echo 2026-04-07 00:44:23.756891 | orchestrator | ++ semver latest 7.0.0 2026-04-07 00:44:23.797687 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-07 00:44:23.797778 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-07 00:44:23.797808 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-07 00:44:24.935790 | orchestrator | 2026-04-07 00:44:24 | INFO  | Trying to run play pull-images in environment custom 2026-04-07 00:44:35.051097 | orchestrator | 2026-04-07 00:44:35 | INFO  | Prepare task for execution of pull-images. 2026-04-07 00:44:35.125693 | orchestrator | 2026-04-07 00:44:35 | INFO  | Task 31d6faa6-20a9-4e96-8cbb-20c2298fdd3c (pull-images) was prepared for execution. 2026-04-07 00:44:35.125800 | orchestrator | 2026-04-07 00:44:35 | INFO  | Task 31d6faa6-20a9-4e96-8cbb-20c2298fdd3c is running in background. No more output. Check ARA for logs. 2026-04-07 00:44:36.642256 | orchestrator | 2026-04-07 00:44:36 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-07 00:44:46.703249 | orchestrator | 2026-04-07 00:44:46 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-07 00:44:46.773766 | orchestrator | 2026-04-07 00:44:46 | INFO  | Task 45d0a469-2978-468c-9e60-3945ceb6648c (wipe-partitions) was prepared for execution. 2026-04-07 00:44:46.773904 | orchestrator | 2026-04-07 00:44:46 | INFO  | It takes a moment until task 45d0a469-2978-468c-9e60-3945ceb6648c (wipe-partitions) has been started and output is visible here. 
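The gate before `osism apply` compares `MANAGER_VERSION` against 7.0.0: in the trace, `semver latest 7.0.0` returns -1, so the numeric branch is skipped and the explicit `latest` check selects the `-e custom pull-images` form. A hedged stand-in for that logic; `compare_version` is a hypothetical replacement for the real `semver` helper (which ships with the testbed scripts), mimicking only the behaviour observed here:

```shell
# Stand-in for the `semver` helper seen in the trace: non-release tags
# such as "latest" compare as -1; otherwise a crude string comparison.
compare_version() {
  case "$1" in
    latest) echo -1 ;;
    *) if [ "$1" = "$2" ]; then echo 0
       elif [ "$1" \> "$2" ]; then echo 1
       else echo -1; fi ;;
  esac
}

MANAGER_VERSION=latest
if [ "$(compare_version "$MANAGER_VERSION" 7.0.0)" -ge 0 ] \
   || [ "$MANAGER_VERSION" = latest ]; then
  echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

Note the string comparison is lexicographic, not true semver ordering; the sketch only reproduces the branch taken in this log.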
2026-04-07 00:44:58.836933 | orchestrator | 2026-04-07 00:44:58.837026 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-07 00:44:58.837044 | orchestrator | 2026-04-07 00:44:58.837057 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-07 00:44:58.837077 | orchestrator | Tuesday 07 April 2026 00:44:49 +0000 (0:00:00.153) 0:00:00.153 ********* 2026-04-07 00:44:58.837128 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:44:58.837142 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:44:58.837153 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:44:58.837164 | orchestrator | 2026-04-07 00:44:58.837175 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-07 00:44:58.837186 | orchestrator | Tuesday 07 April 2026 00:44:51 +0000 (0:00:01.032) 0:00:01.185 ********* 2026-04-07 00:44:58.837202 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:44:58.837213 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:44:58.837224 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:44:58.837235 | orchestrator | 2026-04-07 00:44:58.837249 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-07 00:44:58.837266 | orchestrator | Tuesday 07 April 2026 00:44:51 +0000 (0:00:00.230) 0:00:01.416 ********* 2026-04-07 00:44:58.837283 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:44:58.837295 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:44:58.837306 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:44:58.837317 | orchestrator | 2026-04-07 00:44:58.837327 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-07 00:44:58.837339 | orchestrator | Tuesday 07 April 2026 00:44:51 +0000 (0:00:00.550) 0:00:01.967 ********* 2026-04-07 00:44:58.837350 | orchestrator | skipping: 
[testbed-node-3] 2026-04-07 00:44:58.837361 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:44:58.837371 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:44:58.837382 | orchestrator | 2026-04-07 00:44:58.837475 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-07 00:44:58.837489 | orchestrator | Tuesday 07 April 2026 00:44:52 +0000 (0:00:00.247) 0:00:02.215 ********* 2026-04-07 00:44:58.837501 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-07 00:44:58.837519 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-07 00:44:58.837533 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-07 00:44:58.837545 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-07 00:44:58.837558 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-07 00:44:58.837571 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-07 00:44:58.837583 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-07 00:44:58.837596 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-07 00:44:58.837609 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-07 00:44:58.837622 | orchestrator | 2026-04-07 00:44:58.837634 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-07 00:44:58.837647 | orchestrator | Tuesday 07 April 2026 00:44:53 +0000 (0:00:01.357) 0:00:03.573 ********* 2026-04-07 00:44:58.837660 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-07 00:44:58.837674 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-07 00:44:58.837686 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-07 00:44:58.837699 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-07 00:44:58.837712 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-07 00:44:58.837724 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-04-07 00:44:58.837736 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-07 00:44:58.837748 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-07 00:44:58.837761 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-07 00:44:58.837773 | orchestrator | 2026-04-07 00:44:58.837793 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-07 00:44:58.837806 | orchestrator | Tuesday 07 April 2026 00:44:54 +0000 (0:00:01.504) 0:00:05.077 ********* 2026-04-07 00:44:58.837820 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-07 00:44:58.837831 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-07 00:44:58.837842 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-07 00:44:58.837852 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-07 00:44:58.837873 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-07 00:44:58.837884 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-07 00:44:58.837895 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-07 00:44:58.837906 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-07 00:44:58.837916 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-07 00:44:58.837928 | orchestrator | 2026-04-07 00:44:58.837948 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-07 00:44:58.837967 | orchestrator | Tuesday 07 April 2026 00:44:57 +0000 (0:00:02.253) 0:00:07.331 ********* 2026-04-07 00:44:58.837998 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:44:58.838086 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:44:58.838108 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:44:58.838126 | orchestrator | 2026-04-07 00:44:58.838144 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-04-07 00:44:58.838163 | orchestrator | Tuesday 07 April 2026 00:44:57 +0000 (0:00:00.619) 0:00:07.950 ********* 2026-04-07 00:44:58.838183 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:44:58.838201 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:44:58.838218 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:44:58.838230 | orchestrator | 2026-04-07 00:44:58.838241 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:44:58.838254 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:44:58.838266 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:44:58.838298 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:44:58.838309 | orchestrator | 2026-04-07 00:44:58.838320 | orchestrator | 2026-04-07 00:44:58.838331 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:44:58.838342 | orchestrator | Tuesday 07 April 2026 00:44:58 +0000 (0:00:00.847) 0:00:08.798 ********* 2026-04-07 00:44:58.838353 | orchestrator | =============================================================================== 2026-04-07 00:44:58.838363 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.25s 2026-04-07 00:44:58.838374 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.50s 2026-04-07 00:44:58.838408 | orchestrator | Check device availability ----------------------------------------------- 1.36s 2026-04-07 00:44:58.838420 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.03s 2026-04-07 00:44:58.838430 | orchestrator | Request device events from the kernel 
----------------------------------- 0.85s 2026-04-07 00:44:58.838441 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2026-04-07 00:44:58.838452 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s 2026-04-07 00:44:58.838463 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2026-04-07 00:44:58.838473 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2026-04-07 00:45:10.279708 | orchestrator | 2026-04-07 00:45:10 | INFO  | Prepare task for execution of facts. 2026-04-07 00:45:10.352564 | orchestrator | 2026-04-07 00:45:10 | INFO  | Task 2e236ae1-57d3-422d-895c-39765963a3f8 (facts) was prepared for execution. 2026-04-07 00:45:10.352648 | orchestrator | 2026-04-07 00:45:10 | INFO  | It takes a moment until task 2e236ae1-57d3-422d-895c-39765963a3f8 (facts) has been started and output is visible here. 2026-04-07 00:45:21.041751 | orchestrator | 2026-04-07 00:45:21.041873 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-07 00:45:21.041890 | orchestrator | 2026-04-07 00:45:21.041931 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-07 00:45:21.041943 | orchestrator | Tuesday 07 April 2026 00:45:13 +0000 (0:00:00.296) 0:00:00.296 ********* 2026-04-07 00:45:21.041955 | orchestrator | ok: [testbed-manager] 2026-04-07 00:45:21.041967 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:45:21.041978 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:45:21.041989 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:45:21.041999 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:45:21.042010 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:45:21.042082 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:45:21.042094 | orchestrator | 2026-04-07 00:45:21.042105 | orchestrator | TASK 
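Per the recap above, the wipe-partitions play reduces to three shell steps per data disk: drop filesystem signatures, zero the first 32 MiB, and refresh udev. Sketched here against a sparse temporary file rather than a real `/dev/sdX`, with the signature-wipe and udev lines kept as comments since they need real block devices and root:

```shell
# Safe re-creation of the wipe sequence against a 64 MiB sparse file.
disk=$(mktemp)
truncate -s 64M "$disk"
# wipefs --all "$disk"            # on a real disk: drop fs signatures
dd if=/dev/zero of="$disk" bs=1M count=32 conv=notrunc status=none
# udevadm control --reload-rules  # on a real host: reload udev rules
# udevadm trigger                 # ...and request device events
cmp -n $((32*1024*1024)) "$disk" /dev/zero && echo wiped
```

`conv=notrunc` matters on a file: without it, `dd` would truncate the target to 32 MiB instead of overwriting in place.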
[osism.commons.facts : Copy fact files] *********************************** 2026-04-07 00:45:21.042116 | orchestrator | Tuesday 07 April 2026 00:45:14 +0000 (0:00:01.287) 0:00:01.584 ********* 2026-04-07 00:45:21.042127 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:45:21.042139 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:45:21.042149 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:45:21.042160 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:45:21.042171 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:21.042182 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:21.042192 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:45:21.042203 | orchestrator | 2026-04-07 00:45:21.042214 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-07 00:45:21.042242 | orchestrator | 2026-04-07 00:45:21.042254 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-07 00:45:21.042266 | orchestrator | Tuesday 07 April 2026 00:45:15 +0000 (0:00:01.052) 0:00:02.637 ********* 2026-04-07 00:45:21.042278 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:45:21.042291 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:45:21.042304 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:45:21.042317 | orchestrator | ok: [testbed-manager] 2026-04-07 00:45:21.042330 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:45:21.042342 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:45:21.042354 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:45:21.042366 | orchestrator | 2026-04-07 00:45:21.042405 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-07 00:45:21.042418 | orchestrator | 2026-04-07 00:45:21.042431 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-07 00:45:21.042445 | orchestrator | Tuesday 07 
April 2026 00:45:20 +0000 (0:00:04.691) 0:00:07.328 ********* 2026-04-07 00:45:21.042457 | orchestrator | skipping: [testbed-manager] 2026-04-07 00:45:21.042470 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:45:21.042482 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:45:21.042495 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:45:21.042508 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:21.042520 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:21.042533 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:45:21.042546 | orchestrator | 2026-04-07 00:45:21.042560 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:45:21.042576 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:45:21.042590 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:45:21.042603 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:45:21.042617 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:45:21.042630 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:45:21.042652 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:45:21.042663 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:45:21.042674 | orchestrator | 2026-04-07 00:45:21.042685 | orchestrator | 2026-04-07 00:45:21.042696 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:45:21.042706 | orchestrator | Tuesday 07 April 2026 00:45:20 +0000 (0:00:00.457) 0:00:07.786 ********* 2026-04-07 
00:45:21.042717 | orchestrator | =============================================================================== 2026-04-07 00:45:21.042728 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.69s 2026-04-07 00:45:21.042739 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-04-07 00:45:21.042750 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s 2026-04-07 00:45:21.042761 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2026-04-07 00:45:22.402705 | orchestrator | 2026-04-07 00:45:22 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-07 00:45:22.460817 | orchestrator | 2026-04-07 00:45:22 | INFO  | Task 1a0f86cf-a105-44dc-9319-bd546b66c3ec (ceph-configure-lvm-volumes) was prepared for execution. 2026-04-07 00:45:22.460901 | orchestrator | 2026-04-07 00:45:22 | INFO  | It takes a moment until task 1a0f86cf-a105-44dc-9319-bd546b66c3ec (ceph-configure-lvm-volumes) has been started and output is visible here. 
2026-04-07 00:45:32.968241 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-07 00:45:32.968354 | orchestrator | 2.16.14 2026-04-07 00:45:32.968371 | orchestrator | 2026-04-07 00:45:32.968417 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-07 00:45:32.968431 | orchestrator | 2026-04-07 00:45:32.968444 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-07 00:45:32.968457 | orchestrator | Tuesday 07 April 2026 00:45:26 +0000 (0:00:00.261) 0:00:00.261 ********* 2026-04-07 00:45:32.968471 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-07 00:45:32.968483 | orchestrator | 2026-04-07 00:45:32.968496 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-07 00:45:32.968509 | orchestrator | Tuesday 07 April 2026 00:45:26 +0000 (0:00:00.215) 0:00:00.476 ********* 2026-04-07 00:45:32.968523 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:45:32.968536 | orchestrator | 2026-04-07 00:45:32.968548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.968560 | orchestrator | Tuesday 07 April 2026 00:45:27 +0000 (0:00:00.204) 0:00:00.680 ********* 2026-04-07 00:45:32.968583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-07 00:45:32.968596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-07 00:45:32.968608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-07 00:45:32.968621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-07 00:45:32.968633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-07 
00:45:32.968645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-07 00:45:32.968658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-07 00:45:32.968669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-07 00:45:32.968681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-07 00:45:32.968693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-07 00:45:32.968731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-07 00:45:32.968743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-07 00:45:32.968757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-07 00:45:32.968770 | orchestrator | 2026-04-07 00:45:32.968783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.968797 | orchestrator | Tuesday 07 April 2026 00:45:27 +0000 (0:00:00.333) 0:00:01.013 ********* 2026-04-07 00:45:32.968809 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.968822 | orchestrator | 2026-04-07 00:45:32.968834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.968847 | orchestrator | Tuesday 07 April 2026 00:45:27 +0000 (0:00:00.374) 0:00:01.388 ********* 2026-04-07 00:45:32.968859 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.968870 | orchestrator | 2026-04-07 00:45:32.968881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.968898 | orchestrator | Tuesday 07 April 2026 00:45:27 +0000 (0:00:00.163) 0:00:01.551 ********* 2026-04-07 
00:45:32.968909 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.968919 | orchestrator | 2026-04-07 00:45:32.968930 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.968941 | orchestrator | Tuesday 07 April 2026 00:45:28 +0000 (0:00:00.172) 0:00:01.723 ********* 2026-04-07 00:45:32.968953 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.968964 | orchestrator | 2026-04-07 00:45:32.968975 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.968986 | orchestrator | Tuesday 07 April 2026 00:45:28 +0000 (0:00:00.170) 0:00:01.894 ********* 2026-04-07 00:45:32.968997 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969007 | orchestrator | 2026-04-07 00:45:32.969018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.969029 | orchestrator | Tuesday 07 April 2026 00:45:28 +0000 (0:00:00.166) 0:00:02.061 ********* 2026-04-07 00:45:32.969040 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969051 | orchestrator | 2026-04-07 00:45:32.969063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.969074 | orchestrator | Tuesday 07 April 2026 00:45:28 +0000 (0:00:00.178) 0:00:02.240 ********* 2026-04-07 00:45:32.969085 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969095 | orchestrator | 2026-04-07 00:45:32.969105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.969115 | orchestrator | Tuesday 07 April 2026 00:45:28 +0000 (0:00:00.183) 0:00:02.423 ********* 2026-04-07 00:45:32.969125 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969135 | orchestrator | 2026-04-07 00:45:32.969146 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-04-07 00:45:32.969156 | orchestrator | Tuesday 07 April 2026 00:45:28 +0000 (0:00:00.183) 0:00:02.607 ********* 2026-04-07 00:45:32.969167 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1) 2026-04-07 00:45:32.969179 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1) 2026-04-07 00:45:32.969189 | orchestrator | 2026-04-07 00:45:32.969200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.969230 | orchestrator | Tuesday 07 April 2026 00:45:29 +0000 (0:00:00.366) 0:00:02.973 ********* 2026-04-07 00:45:32.969240 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706) 2026-04-07 00:45:32.969251 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706) 2026-04-07 00:45:32.969260 | orchestrator | 2026-04-07 00:45:32.969277 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.969297 | orchestrator | Tuesday 07 April 2026 00:45:29 +0000 (0:00:00.366) 0:00:03.339 ********* 2026-04-07 00:45:32.969307 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4) 2026-04-07 00:45:32.969317 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4) 2026-04-07 00:45:32.969327 | orchestrator | 2026-04-07 00:45:32.969337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.969347 | orchestrator | Tuesday 07 April 2026 00:45:30 +0000 (0:00:00.499) 0:00:03.839 ********* 2026-04-07 00:45:32.969357 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73) 2026-04-07 00:45:32.969367 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73) 2026-04-07 00:45:32.969398 | orchestrator | 2026-04-07 00:45:32.969409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:32.969419 | orchestrator | Tuesday 07 April 2026 00:45:30 +0000 (0:00:00.582) 0:00:04.421 ********* 2026-04-07 00:45:32.969429 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-07 00:45:32.969439 | orchestrator | 2026-04-07 00:45:32.969449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:32.969460 | orchestrator | Tuesday 07 April 2026 00:45:31 +0000 (0:00:00.584) 0:00:05.006 ********* 2026-04-07 00:45:32.969470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-07 00:45:32.969480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-07 00:45:32.969490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-07 00:45:32.969500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-07 00:45:32.969510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-07 00:45:32.969519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-07 00:45:32.969529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-07 00:45:32.969539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-04-07 00:45:32.969549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-07 00:45:32.969559 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-07 00:45:32.969569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-07 00:45:32.969578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-07 00:45:32.969588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-07 00:45:32.969598 | orchestrator | 2026-04-07 00:45:32.969608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:32.969618 | orchestrator | Tuesday 07 April 2026 00:45:31 +0000 (0:00:00.338) 0:00:05.345 ********* 2026-04-07 00:45:32.969628 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969638 | orchestrator | 2026-04-07 00:45:32.969648 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:32.969658 | orchestrator | Tuesday 07 April 2026 00:45:31 +0000 (0:00:00.183) 0:00:05.528 ********* 2026-04-07 00:45:32.969669 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969679 | orchestrator | 2026-04-07 00:45:32.969688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:32.969699 | orchestrator | Tuesday 07 April 2026 00:45:32 +0000 (0:00:00.176) 0:00:05.705 ********* 2026-04-07 00:45:32.969710 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969727 | orchestrator | 2026-04-07 00:45:32.969737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:32.969748 | orchestrator | Tuesday 07 April 2026 00:45:32 +0000 (0:00:00.180) 0:00:05.886 ********* 2026-04-07 00:45:32.969758 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969768 | orchestrator | 2026-04-07 00:45:32.969778 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-07 00:45:32.969788 | orchestrator | Tuesday 07 April 2026 00:45:32 +0000 (0:00:00.177) 0:00:06.063 ********* 2026-04-07 00:45:32.969798 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969808 | orchestrator | 2026-04-07 00:45:32.969818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:32.969828 | orchestrator | Tuesday 07 April 2026 00:45:32 +0000 (0:00:00.181) 0:00:06.244 ********* 2026-04-07 00:45:32.969837 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969847 | orchestrator | 2026-04-07 00:45:32.969857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:32.969867 | orchestrator | Tuesday 07 April 2026 00:45:32 +0000 (0:00:00.173) 0:00:06.418 ********* 2026-04-07 00:45:32.969877 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:32.969887 | orchestrator | 2026-04-07 00:45:32.969905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:39.744020 | orchestrator | Tuesday 07 April 2026 00:45:32 +0000 (0:00:00.192) 0:00:06.611 ********* 2026-04-07 00:45:39.744107 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744119 | orchestrator | 2026-04-07 00:45:39.744127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:39.744134 | orchestrator | Tuesday 07 April 2026 00:45:33 +0000 (0:00:00.174) 0:00:06.785 ********* 2026-04-07 00:45:39.744141 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-07 00:45:39.744149 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-07 00:45:39.744156 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-07 00:45:39.744163 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-07 00:45:39.744169 | orchestrator | 2026-04-07 
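The repeated "Add known links" tasks map each kernel device (`sdb`, `sdc`, ...) to its stable `/dev/disk/by-id` aliases, such as the `scsi-0QEMU_QEMU_HARDDISK_...` names above. The resolution step can be sketched with `readlink`; this runs against a temporary directory holding one sample link rather than the real `/dev/disk/by-id`:

```shell
# Resolve by-id style symlinks back to their kernel device names,
# using a temp dir with a sample link (the link name is illustrative).
byid=$(mktemp -d)
ln -s ../../sdb "$byid/scsi-0QEMU_QEMU_HARDDISK_example"
for link in "$byid"/*; do
  printf '%s -> %s\n' "$(basename "$link")" \
    "$(basename "$(readlink "$link")")"
done
```

Keying the Ceph configuration on these stable aliases, rather than on `sdb`-style names, protects it from kernel device reordering across reboots.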
00:45:39.744176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:39.744201 | orchestrator | Tuesday 07 April 2026 00:45:33 +0000 (0:00:00.847) 0:00:07.633 ********* 2026-04-07 00:45:39.744209 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744217 | orchestrator | 2026-04-07 00:45:39.744224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:39.744231 | orchestrator | Tuesday 07 April 2026 00:45:34 +0000 (0:00:00.191) 0:00:07.824 ********* 2026-04-07 00:45:39.744237 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744244 | orchestrator | 2026-04-07 00:45:39.744250 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:39.744257 | orchestrator | Tuesday 07 April 2026 00:45:34 +0000 (0:00:00.170) 0:00:07.994 ********* 2026-04-07 00:45:39.744264 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744270 | orchestrator | 2026-04-07 00:45:39.744276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:39.744282 | orchestrator | Tuesday 07 April 2026 00:45:34 +0000 (0:00:00.178) 0:00:08.173 ********* 2026-04-07 00:45:39.744288 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744294 | orchestrator | 2026-04-07 00:45:39.744300 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-07 00:45:39.744306 | orchestrator | Tuesday 07 April 2026 00:45:34 +0000 (0:00:00.202) 0:00:08.375 ********* 2026-04-07 00:45:39.744312 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-07 00:45:39.744319 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-07 00:45:39.744325 | orchestrator | 2026-04-07 00:45:39.744331 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-07 00:45:39.744341 | orchestrator | Tuesday 07 April 2026 00:45:34 +0000 (0:00:00.174) 0:00:08.549 ********* 2026-04-07 00:45:39.744367 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744442 | orchestrator | 2026-04-07 00:45:39.744447 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-07 00:45:39.744451 | orchestrator | Tuesday 07 April 2026 00:45:35 +0000 (0:00:00.125) 0:00:08.675 ********* 2026-04-07 00:45:39.744454 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744458 | orchestrator | 2026-04-07 00:45:39.744462 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-07 00:45:39.744466 | orchestrator | Tuesday 07 April 2026 00:45:35 +0000 (0:00:00.133) 0:00:08.809 ********* 2026-04-07 00:45:39.744469 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744473 | orchestrator | 2026-04-07 00:45:39.744477 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-07 00:45:39.744481 | orchestrator | Tuesday 07 April 2026 00:45:35 +0000 (0:00:00.122) 0:00:08.932 ********* 2026-04-07 00:45:39.744485 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:45:39.744489 | orchestrator | 2026-04-07 00:45:39.744492 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-07 00:45:39.744496 | orchestrator | Tuesday 07 April 2026 00:45:35 +0000 (0:00:00.129) 0:00:09.061 ********* 2026-04-07 00:45:39.744501 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68f67d56-373d-5470-8a0c-a7bd578cf9eb'}}) 2026-04-07 00:45:39.744505 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'}}) 2026-04-07 00:45:39.744509 | orchestrator | 2026-04-07 00:45:39.744513 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-07 00:45:39.744517 | orchestrator | Tuesday 07 April 2026 00:45:35 +0000 (0:00:00.170) 0:00:09.232 ********* 2026-04-07 00:45:39.744521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68f67d56-373d-5470-8a0c-a7bd578cf9eb'}})  2026-04-07 00:45:39.744532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'}})  2026-04-07 00:45:39.744541 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744545 | orchestrator | 2026-04-07 00:45:39.744549 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-07 00:45:39.744553 | orchestrator | Tuesday 07 April 2026 00:45:35 +0000 (0:00:00.127) 0:00:09.360 ********* 2026-04-07 00:45:39.744556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68f67d56-373d-5470-8a0c-a7bd578cf9eb'}})  2026-04-07 00:45:39.744560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'}})  2026-04-07 00:45:39.744565 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744571 | orchestrator | 2026-04-07 00:45:39.744577 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-07 00:45:39.744584 | orchestrator | Tuesday 07 April 2026 00:45:35 +0000 (0:00:00.278) 0:00:09.639 ********* 2026-04-07 00:45:39.744590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68f67d56-373d-5470-8a0c-a7bd578cf9eb'}})  2026-04-07 00:45:39.744611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'}})  2026-04-07 00:45:39.744619 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744626 | 
orchestrator | 2026-04-07 00:45:39.744633 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-07 00:45:39.744640 | orchestrator | Tuesday 07 April 2026 00:45:36 +0000 (0:00:00.141) 0:00:09.780 ********* 2026-04-07 00:45:39.744647 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:45:39.744654 | orchestrator | 2026-04-07 00:45:39.744661 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-07 00:45:39.744668 | orchestrator | Tuesday 07 April 2026 00:45:36 +0000 (0:00:00.129) 0:00:09.909 ********* 2026-04-07 00:45:39.744675 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:45:39.744690 | orchestrator | 2026-04-07 00:45:39.744697 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-07 00:45:39.744704 | orchestrator | Tuesday 07 April 2026 00:45:36 +0000 (0:00:00.123) 0:00:10.032 ********* 2026-04-07 00:45:39.744713 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744720 | orchestrator | 2026-04-07 00:45:39.744728 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-07 00:45:39.744736 | orchestrator | Tuesday 07 April 2026 00:45:36 +0000 (0:00:00.117) 0:00:10.150 ********* 2026-04-07 00:45:39.744743 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744750 | orchestrator | 2026-04-07 00:45:39.744757 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-07 00:45:39.744764 | orchestrator | Tuesday 07 April 2026 00:45:36 +0000 (0:00:00.120) 0:00:10.270 ********* 2026-04-07 00:45:39.744771 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744778 | orchestrator | 2026-04-07 00:45:39.744785 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-07 00:45:39.744792 | orchestrator | Tuesday 07 April 2026 00:45:36 +0000 
(0:00:00.115) 0:00:10.386 ********* 2026-04-07 00:45:39.744800 | orchestrator | ok: [testbed-node-3] => { 2026-04-07 00:45:39.744807 | orchestrator |  "ceph_osd_devices": { 2026-04-07 00:45:39.744814 | orchestrator |  "sdb": { 2026-04-07 00:45:39.744821 | orchestrator |  "osd_lvm_uuid": "68f67d56-373d-5470-8a0c-a7bd578cf9eb" 2026-04-07 00:45:39.744828 | orchestrator |  }, 2026-04-07 00:45:39.744835 | orchestrator |  "sdc": { 2026-04-07 00:45:39.744846 | orchestrator |  "osd_lvm_uuid": "eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d" 2026-04-07 00:45:39.744853 | orchestrator |  } 2026-04-07 00:45:39.744860 | orchestrator |  } 2026-04-07 00:45:39.744868 | orchestrator | } 2026-04-07 00:45:39.744875 | orchestrator | 2026-04-07 00:45:39.744883 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-07 00:45:39.744890 | orchestrator | Tuesday 07 April 2026 00:45:36 +0000 (0:00:00.133) 0:00:10.519 ********* 2026-04-07 00:45:39.744897 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744904 | orchestrator | 2026-04-07 00:45:39.744912 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-07 00:45:39.744919 | orchestrator | Tuesday 07 April 2026 00:45:37 +0000 (0:00:00.138) 0:00:10.658 ********* 2026-04-07 00:45:39.744926 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744933 | orchestrator | 2026-04-07 00:45:39.744940 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-07 00:45:39.744947 | orchestrator | Tuesday 07 April 2026 00:45:37 +0000 (0:00:00.118) 0:00:10.777 ********* 2026-04-07 00:45:39.744954 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:45:39.744961 | orchestrator | 2026-04-07 00:45:39.744968 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-07 00:45:39.744978 | orchestrator | Tuesday 07 April 2026 00:45:37 +0000 
(0:00:00.165) 0:00:10.942 ********* 2026-04-07 00:45:39.744986 | orchestrator | changed: [testbed-node-3] => { 2026-04-07 00:45:39.744995 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-07 00:45:39.745001 | orchestrator |  "ceph_osd_devices": { 2026-04-07 00:45:39.745008 | orchestrator |  "sdb": { 2026-04-07 00:45:39.745015 | orchestrator |  "osd_lvm_uuid": "68f67d56-373d-5470-8a0c-a7bd578cf9eb" 2026-04-07 00:45:39.745022 | orchestrator |  }, 2026-04-07 00:45:39.745029 | orchestrator |  "sdc": { 2026-04-07 00:45:39.745036 | orchestrator |  "osd_lvm_uuid": "eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d" 2026-04-07 00:45:39.745042 | orchestrator |  } 2026-04-07 00:45:39.745049 | orchestrator |  }, 2026-04-07 00:45:39.745055 | orchestrator |  "lvm_volumes": [ 2026-04-07 00:45:39.745062 | orchestrator |  { 2026-04-07 00:45:39.745068 | orchestrator |  "data": "osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb", 2026-04-07 00:45:39.745075 | orchestrator |  "data_vg": "ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb" 2026-04-07 00:45:39.745086 | orchestrator |  }, 2026-04-07 00:45:39.745095 | orchestrator |  { 2026-04-07 00:45:39.745102 | orchestrator |  "data": "osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d", 2026-04-07 00:45:39.745109 | orchestrator |  "data_vg": "ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d" 2026-04-07 00:45:39.745115 | orchestrator |  } 2026-04-07 00:45:39.745122 | orchestrator |  ] 2026-04-07 00:45:39.745128 | orchestrator |  } 2026-04-07 00:45:39.745135 | orchestrator | } 2026-04-07 00:45:39.745141 | orchestrator | 2026-04-07 00:45:39.745148 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-07 00:45:39.745154 | orchestrator | Tuesday 07 April 2026 00:45:37 +0000 (0:00:00.198) 0:00:11.140 ********* 2026-04-07 00:45:39.745161 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-07 00:45:39.745167 | orchestrator | 2026-04-07 00:45:39.745173 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-07 00:45:39.745180 | orchestrator | 2026-04-07 00:45:39.745186 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-07 00:45:39.745192 | orchestrator | Tuesday 07 April 2026 00:45:39 +0000 (0:00:01.806) 0:00:12.947 ********* 2026-04-07 00:45:39.745202 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-07 00:45:39.745209 | orchestrator | 2026-04-07 00:45:39.745216 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-07 00:45:39.745223 | orchestrator | Tuesday 07 April 2026 00:45:39 +0000 (0:00:00.252) 0:00:13.199 ********* 2026-04-07 00:45:39.745229 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:45:39.745236 | orchestrator | 2026-04-07 00:45:39.745248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250501 | orchestrator | Tuesday 07 April 2026 00:45:39 +0000 (0:00:00.191) 0:00:13.391 ********* 2026-04-07 00:45:46.250597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-07 00:45:46.250609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-07 00:45:46.250616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-07 00:45:46.250622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-07 00:45:46.250628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-07 00:45:46.250634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-07 00:45:46.250640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-07 00:45:46.250649 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-07 00:45:46.250656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-07 00:45:46.250662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-07 00:45:46.250668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-07 00:45:46.250673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-07 00:45:46.250697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-07 00:45:46.250704 | orchestrator | 2026-04-07 00:45:46.250711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250718 | orchestrator | Tuesday 07 April 2026 00:45:40 +0000 (0:00:00.333) 0:00:13.725 ********* 2026-04-07 00:45:46.250724 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.250731 | orchestrator | 2026-04-07 00:45:46.250737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250743 | orchestrator | Tuesday 07 April 2026 00:45:40 +0000 (0:00:00.166) 0:00:13.891 ********* 2026-04-07 00:45:46.250766 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.250773 | orchestrator | 2026-04-07 00:45:46.250780 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250787 | orchestrator | Tuesday 07 April 2026 00:45:40 +0000 (0:00:00.163) 0:00:14.054 ********* 2026-04-07 00:45:46.250794 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.250800 | orchestrator | 2026-04-07 00:45:46.250806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250812 | 
orchestrator | Tuesday 07 April 2026 00:45:40 +0000 (0:00:00.162) 0:00:14.217 ********* 2026-04-07 00:45:46.250818 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.250824 | orchestrator | 2026-04-07 00:45:46.250829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250835 | orchestrator | Tuesday 07 April 2026 00:45:40 +0000 (0:00:00.167) 0:00:14.385 ********* 2026-04-07 00:45:46.250841 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.250847 | orchestrator | 2026-04-07 00:45:46.250852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250858 | orchestrator | Tuesday 07 April 2026 00:45:41 +0000 (0:00:00.485) 0:00:14.870 ********* 2026-04-07 00:45:46.250864 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.250871 | orchestrator | 2026-04-07 00:45:46.250877 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250884 | orchestrator | Tuesday 07 April 2026 00:45:41 +0000 (0:00:00.156) 0:00:15.026 ********* 2026-04-07 00:45:46.250891 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.250895 | orchestrator | 2026-04-07 00:45:46.250898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250902 | orchestrator | Tuesday 07 April 2026 00:45:41 +0000 (0:00:00.174) 0:00:15.201 ********* 2026-04-07 00:45:46.250906 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.250910 | orchestrator | 2026-04-07 00:45:46.250914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250917 | orchestrator | Tuesday 07 April 2026 00:45:41 +0000 (0:00:00.171) 0:00:15.372 ********* 2026-04-07 00:45:46.250921 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988) 2026-04-07 00:45:46.250927 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988) 2026-04-07 00:45:46.250931 | orchestrator | 2026-04-07 00:45:46.250935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250938 | orchestrator | Tuesday 07 April 2026 00:45:42 +0000 (0:00:00.398) 0:00:15.770 ********* 2026-04-07 00:45:46.250942 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30) 2026-04-07 00:45:46.250946 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30) 2026-04-07 00:45:46.250950 | orchestrator | 2026-04-07 00:45:46.250953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250957 | orchestrator | Tuesday 07 April 2026 00:45:42 +0000 (0:00:00.381) 0:00:16.152 ********* 2026-04-07 00:45:46.250961 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e) 2026-04-07 00:45:46.250965 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e) 2026-04-07 00:45:46.250969 | orchestrator | 2026-04-07 00:45:46.250973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:45:46.250990 | orchestrator | Tuesday 07 April 2026 00:45:42 +0000 (0:00:00.426) 0:00:16.579 ********* 2026-04-07 00:45:46.250994 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39) 2026-04-07 00:45:46.250998 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39) 2026-04-07 00:45:46.251002 | orchestrator | 2026-04-07 00:45:46.251012 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-07 00:45:46.251029 | orchestrator | Tuesday 07 April 2026 00:45:43 +0000 (0:00:00.372) 0:00:16.951 ********* 2026-04-07 00:45:46.251033 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-07 00:45:46.251038 | orchestrator | 2026-04-07 00:45:46.251042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251047 | orchestrator | Tuesday 07 April 2026 00:45:43 +0000 (0:00:00.287) 0:00:17.239 ********* 2026-04-07 00:45:46.251051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-07 00:45:46.251055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-07 00:45:46.251066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-07 00:45:46.251070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-07 00:45:46.251075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-07 00:45:46.251079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-07 00:45:46.251083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-07 00:45:46.251088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-07 00:45:46.251094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-07 00:45:46.251100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-07 00:45:46.251105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-07 00:45:46.251111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-07 00:45:46.251117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-07 00:45:46.251123 | orchestrator | 2026-04-07 00:45:46.251130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251136 | orchestrator | Tuesday 07 April 2026 00:45:43 +0000 (0:00:00.353) 0:00:17.592 ********* 2026-04-07 00:45:46.251143 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.251149 | orchestrator | 2026-04-07 00:45:46.251155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251160 | orchestrator | Tuesday 07 April 2026 00:45:44 +0000 (0:00:00.213) 0:00:17.806 ********* 2026-04-07 00:45:46.251164 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.251169 | orchestrator | 2026-04-07 00:45:46.251174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251178 | orchestrator | Tuesday 07 April 2026 00:45:44 +0000 (0:00:00.506) 0:00:18.312 ********* 2026-04-07 00:45:46.251183 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.251187 | orchestrator | 2026-04-07 00:45:46.251192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251196 | orchestrator | Tuesday 07 April 2026 00:45:44 +0000 (0:00:00.166) 0:00:18.479 ********* 2026-04-07 00:45:46.251201 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.251205 | orchestrator | 2026-04-07 00:45:46.251210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251215 | orchestrator | Tuesday 07 April 2026 00:45:44 +0000 (0:00:00.167) 0:00:18.647 ********* 2026-04-07 00:45:46.251219 
| orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.251224 | orchestrator | 2026-04-07 00:45:46.251227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251231 | orchestrator | Tuesday 07 April 2026 00:45:45 +0000 (0:00:00.173) 0:00:18.820 ********* 2026-04-07 00:45:46.251235 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.251243 | orchestrator | 2026-04-07 00:45:46.251246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251250 | orchestrator | Tuesday 07 April 2026 00:45:45 +0000 (0:00:00.141) 0:00:18.961 ********* 2026-04-07 00:45:46.251262 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.251265 | orchestrator | 2026-04-07 00:45:46.251269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251273 | orchestrator | Tuesday 07 April 2026 00:45:45 +0000 (0:00:00.145) 0:00:19.107 ********* 2026-04-07 00:45:46.251277 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:46.251280 | orchestrator | 2026-04-07 00:45:46.251284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251288 | orchestrator | Tuesday 07 April 2026 00:45:45 +0000 (0:00:00.131) 0:00:19.239 ********* 2026-04-07 00:45:46.251292 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-07 00:45:46.251297 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-07 00:45:46.251301 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-07 00:45:46.251305 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-07 00:45:46.251308 | orchestrator | 2026-04-07 00:45:46.251312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:46.251316 | orchestrator | Tuesday 07 April 2026 00:45:46 +0000 (0:00:00.557) 0:00:19.796 
********* 2026-04-07 00:45:46.251320 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.039233 | orchestrator | 2026-04-07 00:45:52.039344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:52.039362 | orchestrator | Tuesday 07 April 2026 00:45:46 +0000 (0:00:00.156) 0:00:19.953 ********* 2026-04-07 00:45:52.039480 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.039496 | orchestrator | 2026-04-07 00:45:52.039508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:52.039519 | orchestrator | Tuesday 07 April 2026 00:45:46 +0000 (0:00:00.173) 0:00:20.127 ********* 2026-04-07 00:45:52.039530 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.039541 | orchestrator | 2026-04-07 00:45:52.039552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:45:52.039563 | orchestrator | Tuesday 07 April 2026 00:45:46 +0000 (0:00:00.137) 0:00:20.264 ********* 2026-04-07 00:45:52.039574 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.039584 | orchestrator | 2026-04-07 00:45:52.039595 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-07 00:45:52.039606 | orchestrator | Tuesday 07 April 2026 00:45:46 +0000 (0:00:00.170) 0:00:20.435 ********* 2026-04-07 00:45:52.039617 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-07 00:45:52.039628 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-07 00:45:52.039639 | orchestrator | 2026-04-07 00:45:52.039650 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-07 00:45:52.039680 | orchestrator | Tuesday 07 April 2026 00:45:47 +0000 (0:00:00.322) 0:00:20.757 ********* 2026-04-07 00:45:52.039692 | orchestrator | skipping: 
[testbed-node-4] 2026-04-07 00:45:52.039702 | orchestrator | 2026-04-07 00:45:52.039713 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-07 00:45:52.039724 | orchestrator | Tuesday 07 April 2026 00:45:47 +0000 (0:00:00.136) 0:00:20.893 ********* 2026-04-07 00:45:52.039735 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.039746 | orchestrator | 2026-04-07 00:45:52.039757 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-07 00:45:52.039773 | orchestrator | Tuesday 07 April 2026 00:45:47 +0000 (0:00:00.122) 0:00:21.016 ********* 2026-04-07 00:45:52.039784 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.039795 | orchestrator | 2026-04-07 00:45:52.039805 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-07 00:45:52.039816 | orchestrator | Tuesday 07 April 2026 00:45:47 +0000 (0:00:00.119) 0:00:21.136 ********* 2026-04-07 00:45:52.039849 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:45:52.039861 | orchestrator | 2026-04-07 00:45:52.039872 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-07 00:45:52.039883 | orchestrator | Tuesday 07 April 2026 00:45:47 +0000 (0:00:00.107) 0:00:21.244 ********* 2026-04-07 00:45:52.039895 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43d30fb7-a654-5dbf-ba50-28c21932998c'}}) 2026-04-07 00:45:52.039906 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db8a0de8-f58a-5642-89e2-a8dce5d117db'}}) 2026-04-07 00:45:52.039917 | orchestrator | 2026-04-07 00:45:52.039927 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-07 00:45:52.039938 | orchestrator | Tuesday 07 April 2026 00:45:47 +0000 (0:00:00.160) 0:00:21.404 ********* 2026-04-07 00:45:52.039950 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43d30fb7-a654-5dbf-ba50-28c21932998c'}})  2026-04-07 00:45:52.039963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db8a0de8-f58a-5642-89e2-a8dce5d117db'}})  2026-04-07 00:45:52.039974 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.039984 | orchestrator | 2026-04-07 00:45:52.039995 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-07 00:45:52.040006 | orchestrator | Tuesday 07 April 2026 00:45:47 +0000 (0:00:00.150) 0:00:21.555 ********* 2026-04-07 00:45:52.040017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43d30fb7-a654-5dbf-ba50-28c21932998c'}})  2026-04-07 00:45:52.040028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db8a0de8-f58a-5642-89e2-a8dce5d117db'}})  2026-04-07 00:45:52.040040 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.040051 | orchestrator | 2026-04-07 00:45:52.040061 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-07 00:45:52.040072 | orchestrator | Tuesday 07 April 2026 00:45:48 +0000 (0:00:00.157) 0:00:21.712 ********* 2026-04-07 00:45:52.040083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43d30fb7-a654-5dbf-ba50-28c21932998c'}})  2026-04-07 00:45:52.040094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db8a0de8-f58a-5642-89e2-a8dce5d117db'}})  2026-04-07 00:45:52.040105 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.040115 | orchestrator | 2026-04-07 00:45:52.040126 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-07 00:45:52.040137 | orchestrator | Tuesday 07 April 2026 00:45:48 +0000 
(0:00:00.164) 0:00:21.877 ********* 2026-04-07 00:45:52.040148 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:45:52.040165 | orchestrator | 2026-04-07 00:45:52.040183 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-07 00:45:52.040201 | orchestrator | Tuesday 07 April 2026 00:45:48 +0000 (0:00:00.132) 0:00:22.009 ********* 2026-04-07 00:45:52.040219 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:45:52.040237 | orchestrator | 2026-04-07 00:45:52.040255 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-07 00:45:52.040275 | orchestrator | Tuesday 07 April 2026 00:45:48 +0000 (0:00:00.147) 0:00:22.157 ********* 2026-04-07 00:45:52.040317 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.040338 | orchestrator | 2026-04-07 00:45:52.040358 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-07 00:45:52.040406 | orchestrator | Tuesday 07 April 2026 00:45:48 +0000 (0:00:00.144) 0:00:22.301 ********* 2026-04-07 00:45:52.040418 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.040429 | orchestrator | 2026-04-07 00:45:52.040440 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-07 00:45:52.040451 | orchestrator | Tuesday 07 April 2026 00:45:49 +0000 (0:00:00.365) 0:00:22.666 ********* 2026-04-07 00:45:52.040461 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:45:52.040488 | orchestrator | 2026-04-07 00:45:52.040499 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-07 00:45:52.040510 | orchestrator | Tuesday 07 April 2026 00:45:49 +0000 (0:00:00.131) 0:00:22.798 ********* 2026-04-07 00:45:52.040521 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 00:45:52.040531 | orchestrator |  "ceph_osd_devices": { 2026-04-07 00:45:52.040542 | orchestrator |  "sdb": { 
            "osd_lvm_uuid": "43d30fb7-a654-5dbf-ba50-28c21932998c"
        },
        "sdc": {
            "osd_lvm_uuid": "db8a0de8-f58a-5642-89e2-a8dce5d117db"
        }
    }
}

TASK [Print WAL devices] *******************************************************
Tuesday 07 April 2026 00:45:49 +0000 (0:00:00.137) 0:00:22.936 *********
skipping: [testbed-node-4]

TASK [Print DB devices] ********************************************************
Tuesday 07 April 2026 00:45:49 +0000 (0:00:00.116) 0:00:23.052 *********
skipping: [testbed-node-4]

TASK [Print shared DB/WAL devices] *********************************************
Tuesday 07 April 2026 00:45:49 +0000 (0:00:00.120) 0:00:23.173 *********
skipping: [testbed-node-4]

TASK [Print configuration data] ************************************************
Tuesday 07 April 2026 00:45:49 +0000 (0:00:00.142) 0:00:23.315 *********
changed: [testbed-node-4] => {
    "_ceph_configure_lvm_config_data": {
        "ceph_osd_devices": {
            "sdb": {
                "osd_lvm_uuid": "43d30fb7-a654-5dbf-ba50-28c21932998c"
            },
            "sdc": {
                "osd_lvm_uuid": "db8a0de8-f58a-5642-89e2-a8dce5d117db"
            }
        },
        "lvm_volumes": [
            {
                "data": "osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c",
                "data_vg": "ceph-43d30fb7-a654-5dbf-ba50-28c21932998c"
            },
            {
                "data": "osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db",
                "data_vg": "ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db"
            }
        ]
    }
}

RUNNING HANDLER [Write configuration file] *************************************
Tuesday 07 April 2026 00:45:49 +0000 (0:00:00.186) 0:00:23.501 *********
changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]

PLAY [Ceph configure LVM] ******************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Tuesday 07 April 2026 00:45:50 +0000 (0:00:01.044) 0:00:24.545 *********
ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Tuesday 07 April 2026 00:45:51 +0000 (0:00:00.373) 0:00:24.919 *********
ok: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:51 +0000 (0:00:00.504) 0:00:25.424 *********
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:52 +0000 (0:00:00.356) 0:00:25.781 *********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:52 +0000 (0:00:00.181) 0:00:25.962 *********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:52 +0000 (0:00:00.165) 0:00:26.127 *********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:52 +0000 (0:00:00.161) 0:00:26.289 *********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:52 +0000 (0:00:00.276) 0:00:26.566 *********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:53 +0000 (0:00:00.245) 0:00:26.811 *********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:53 +0000 (0:00:00.184) 0:00:26.996 *********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:53 +0000 (0:00:00.164) 0:00:27.160 *********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:53 +0000 (0:00:00.166) 0:00:27.327 *********
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:54 +0000 (0:00:00.546) 0:00:27.873 *********
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:55 +0000 (0:00:00.839) 0:00:28.712 *********
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:55 +0000 (0:00:00.390) 0:00:29.103 *********
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:45:55 +0000 (0:00:00.478) 0:00:29.581 *********
ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:56 +0000 (0:00:00.315) 0:00:29.896 *********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:56 +0000 (0:00:00.371) 0:00:30.268 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:56 +0000 (0:00:00.207) 0:00:30.476 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:57 +0000 (0:00:00.183) 0:00:30.659 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:57 +0000 (0:00:00.195) 0:00:30.855 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:57 +0000 (0:00:00.188) 0:00:31.043 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:57 +0000 (0:00:00.259) 0:00:31.303 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:58 +0000 (0:00:00.554) 0:00:31.857 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:58 +0000 (0:00:00.207) 0:00:32.064 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:58 +0000 (0:00:00.170) 0:00:32.235 *********
ok: [testbed-node-5] => (item=sda1)
ok: [testbed-node-5] => (item=sda14)
ok: [testbed-node-5] => (item=sda15)
ok: [testbed-node-5] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:59 +0000 (0:00:00.606) 0:00:32.842 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:59 +0000 (0:00:00.184) 0:00:33.026 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:59 +0000 (0:00:00.185) 0:00:33.212 *********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:45:59 +0000 (0:00:00.199) 0:00:33.411 *********
skipping: [testbed-node-5]

TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
Tuesday 07 April 2026 00:45:59 +0000 (0:00:00.174) 0:00:33.586 *********
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})

TASK [Generate WAL VG names] ***************************************************
Tuesday 07 April 2026 00:46:00 +0000 (0:00:00.143) 0:00:33.729 *********
skipping: [testbed-node-5]

TASK [Generate DB VG names] ****************************************************
Tuesday 07 April 2026 00:46:00 +0000 (0:00:00.125) 0:00:33.855 *********
skipping: [testbed-node-5]

TASK [Generate shared DB/WAL VG names] *****************************************
Tuesday 07 April 2026 00:46:00 +0000 (0:00:00.113) 0:00:33.969 *********
skipping: [testbed-node-5]

TASK [Define lvm_volumes structures] *******************************************
Tuesday 07 April 2026 00:46:00 +0000 (0:00:00.121) 0:00:34.091 *********
ok: [testbed-node-5]

TASK [Generate lvm_volumes structure (block only)] *****************************
Tuesday 07 April 2026 00:46:00 +0000 (0:00:00.264) 0:00:34.355 *********
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'}})
ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27d9f8cd-a6eb-5015-929a-744349431582'}})

TASK [Generate lvm_volumes structure (block + db)] *****************************
Tuesday 07 April 2026 00:46:00 +0000 (0:00:00.147) 0:00:34.502 *********
skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'}})
skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27d9f8cd-a6eb-5015-929a-744349431582'}})
skipping: [testbed-node-5]

TASK [Generate lvm_volumes structure (block + wal)] ****************************
Tuesday 07 April 2026 00:46:00 +0000 (0:00:00.137) 0:00:34.640 *********
skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'}})
skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27d9f8cd-a6eb-5015-929a-744349431582'}})
skipping: [testbed-node-5]

TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
Tuesday 07 April 2026 00:46:01 +0000 (0:00:00.135) 0:00:34.776 *********
skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'}})
skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27d9f8cd-a6eb-5015-929a-744349431582'}})
skipping: [testbed-node-5]

TASK [Compile lvm_volumes] *****************************************************
Tuesday 07 April 2026 00:46:01 +0000 (0:00:00.140) 0:00:34.916 *********
ok: [testbed-node-5]

TASK [Set OSD devices config data] *********************************************
Tuesday 07 April 2026 00:46:01 +0000 (0:00:00.124) 0:00:35.041 *********
ok: [testbed-node-5]

TASK [Set DB devices config data] **********************************************
Tuesday 07 April 2026 00:46:01 +0000 (0:00:00.124) 0:00:35.166 *********
skipping: [testbed-node-5]

TASK [Set WAL devices config data] *********************************************
Tuesday 07 April 2026 00:46:01 +0000 (0:00:00.126) 0:00:35.292 *********
skipping: [testbed-node-5]

TASK [Set DB+WAL devices config data] ******************************************
Tuesday 07 April 2026 00:46:01 +0000 (0:00:00.103) 0:00:35.396 *********
skipping: [testbed-node-5]

TASK [Print ceph_osd_devices] **************************************************
Tuesday 07 April 2026 00:46:01 +0000 (0:00:00.118) 0:00:35.515 *********
ok: [testbed-node-5] => {
    "ceph_osd_devices": {
        "sdb": {
            "osd_lvm_uuid": "959bec69-a72e-5ac6-9cdc-b8ec54ca62e0"
        },
        "sdc": {
            "osd_lvm_uuid": "27d9f8cd-a6eb-5015-929a-744349431582"
        }
    }
}

TASK [Print WAL devices] *******************************************************
Tuesday 07 April 2026 00:46:02 +0000 (0:00:00.142) 0:00:35.657 *********
skipping: [testbed-node-5]

TASK [Print DB devices] ********************************************************
Tuesday 07 April 2026 00:46:02 +0000 (0:00:00.137) 0:00:35.795 *********
skipping: [testbed-node-5]

TASK [Print shared DB/WAL devices] *********************************************
Tuesday 07 April 2026 00:46:02 +0000 (0:00:00.329) 0:00:36.124 *********
skipping: [testbed-node-5]

TASK [Print configuration data] ************************************************
Tuesday 07 April 2026 00:46:02 +0000 (0:00:00.144) 0:00:36.269 *********
changed: [testbed-node-5] => {
    "_ceph_configure_lvm_config_data": {
        "ceph_osd_devices": {
            "sdb": {
                "osd_lvm_uuid": "959bec69-a72e-5ac6-9cdc-b8ec54ca62e0"
            },
            "sdc": {
                "osd_lvm_uuid": "27d9f8cd-a6eb-5015-929a-744349431582"
            }
        },
        "lvm_volumes": [
            {
                "data": "osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0",
                "data_vg": "ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0"
            },
            {
                "data": "osd-block-27d9f8cd-a6eb-5015-929a-744349431582",
                "data_vg": "ceph-27d9f8cd-a6eb-5015-929a-744349431582"
            }
        ]
    }
}

RUNNING HANDLER [Write configuration file] *************************************
Tuesday 07 April 2026 00:46:02 +0000 (0:00:00.220) 0:00:36.489 *********
changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]

PLAY RECAP *********************************************************************
testbed-node-3             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
testbed-node-4             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
testbed-node-5             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Tuesday 07 April 2026 00:46:03 +0000 (0:00:01.018) 0:00:37.508 *********
===============================================================================
Write configuration file ------------------------------------------------ 3.87s
Add known partitions to the list of available block devices ------------- 1.06s
Add known links to the list of available block devices ------------------ 1.02s
Get initial list of available block devices ----------------------------- 0.90s
Add known partitions to the list of available block devices ------------- 0.85s
Get extra vars for Ceph configuration ----------------------------------- 0.84s
Add known links to the list of available block devices ------------------ 0.84s
Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.64s
Add known partitions to the list of available block devices ------------- 0.61s
Print configuration data ------------------------------------------------ 0.60s
Set WAL devices config data --------------------------------------------- 0.59s
Add known links to the list of available block devices ------------------ 0.58s
Add known links to the list of available block devices ------------------ 0.58s
Generate lvm_volumes structure (block + wal) ---------------------------- 0.57s
Print DB devices -------------------------------------------------------- 0.57s
Add known partitions to the list of available block devices ------------- 0.56s
Add known partitions to the list of available block devices ------------- 0.55s
Add known links to the list of available block devices ------------------ 0.55s
Add known partitions to the list of available block devices ------------- 0.51s
Define lvm_volumes structures ------------------------------------------- 0.50s

2026-04-07 00:46:25 | INFO  | Task 7c7f0389-b73e-45bb-a073-2e84cf0cdf88 (sync inventory) is running in background. Output coming soon.
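The `Compile lvm_volumes` step in the play above turns each entry of `ceph_osd_devices` into a `data`/`data_vg` pair derived from its `osd_lvm_uuid`, as the printed `_ceph_configure_lvm_config_data` shows (`data: osd-block-<uuid>`, `data_vg: ceph-<uuid>`). A minimal sketch of that mapping for the block-only layout (the function name is illustrative, not the testbed's actual task code):

```python
def compile_lvm_volumes(ceph_osd_devices):
    """Derive ceph-ansible style lvm_volumes entries (block-only layout,
    no separate DB/WAL devices) from per-device OSD LVM UUIDs."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# The devices as printed for testbed-node-5 in the log above
devices = {
    "sdb": {"osd_lvm_uuid": "959bec69-a72e-5ac6-9cdc-b8ec54ca62e0"},
    "sdc": {"osd_lvm_uuid": "27d9f8cd-a6eb-5015-929a-744349431582"},
}
print(compile_lvm_volumes(devices))
```

Because the names are pure functions of the stable per-device UUID, re-running the play reproduces the same `lvm_volumes` structure, which is why the tasks report `ok` rather than `changed` for the data itself.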
2026-04-07 00:46:27 | INFO  | Starting group_vars file reorganization
2026-04-07 00:46:27 | INFO  | Moved 0 file(s) to their respective directories
2026-04-07 00:46:27 | INFO  | Group_vars file reorganization completed
2026-04-07 00:46:29 | INFO  | Starting variable preparation from inventory
2026-04-07 00:46:32 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-07 00:46:32 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-07 00:46:32 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-07 00:46:32 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-07 00:46:32 | INFO  | Variable preparation completed
2026-04-07 00:46:34 | INFO  | Starting inventory overwrite handling
2026-04-07 00:46:34 | INFO  | Handling group overwrites in 99-overwrite
2026-04-07 00:46:34 | INFO  | Removing group frr:children from 60-generic
2026-04-07 00:46:34 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-07 00:46:34 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-07 00:46:34 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-07 00:46:34 | INFO  | Handling group overwrites in 20-roles
2026-04-07 00:46:34 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-07 00:46:34 | INFO  | Removed 5 group(s) in total
2026-04-07 00:46:34 | INFO  | Inventory overwrite handling completed
2026-04-07 00:46:35 | INFO  | Starting merge of inventory files
2026-04-07 00:46:35 | INFO  | Inventory files merged successfully
2026-04-07 00:46:40 | INFO  | Generating minified hosts file
2026-04-07 00:46:42 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-07 00:46:42 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-07 00:46:43 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-07 00:46:54 | INFO  | Successfully wrote ClusterShell configuration
[master d81965d] 2026-04-07-00-46
 5 files changed, 75 insertions(+), 10 deletions(-)
 create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
 create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
 create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-07 00:46:57 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-07 00:46:57 | INFO  | Task 4bb4e703-fd3b-4b98-b120-abaa6e512097 (ceph-create-lvm-devices) was prepared for execution.
2026-04-07 00:46:57 | INFO  | It takes a moment until task 4bb4e703-fd3b-4b98-b120-abaa6e512097 (ceph-create-lvm-devices) has been started and output is visible here.
[WARNING]: Collection community.general does not support Ansible version 2.16.14

PLAY [Ceph create LVM devices] *************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Tuesday 07 April 2026 00:47:03 +0000 (0:00:00.373) 0:00:00.373 *********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Tuesday 07 April 2026 00:47:03 +0000 (0:00:00.276) 0:00:00.650 *********
ok: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:03 +0000 (0:00:00.281) 0:00:00.931 *********
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:04 +0000 (0:00:00.600) 0:00:01.531 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:05 +0000 (0:00:00.787) 0:00:02.319 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:05 +0000 (0:00:00.202) 0:00:02.521 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:05 +0000 (0:00:00.228) 0:00:02.750 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:05 +0000 (0:00:00.204) 0:00:02.954 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:05 +0000 (0:00:00.277) 0:00:03.232 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:06 +0000 (0:00:00.190) 0:00:03.422 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:06 +0000 (0:00:00.231) 0:00:03.653 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:06 +0000 (0:00:00.201) 0:00:03.854 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:07 +0000 (0:00:00.458) 0:00:04.312 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:07 +0000 (0:00:00.437) 0:00:04.750 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:08 +0000 (0:00:00.863) 0:00:05.614 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73)

TASK [Add known links to the list of available block devices] ******************
Tuesday 07 April 2026 00:47:09 +0000 (0:00:00.682) 0:00:06.297 *********
ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:10 +0000 (0:00:01.085) 0:00:07.383 *********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:10 +0000 (0:00:00.461) 0:00:07.845 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:10 +0000 (0:00:00.205) 0:00:08.050 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:10 +0000 (0:00:00.231) 0:00:08.282 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:11 +0000 (0:00:00.196) 0:00:08.478 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:11 +0000 (0:00:00.220) 0:00:08.699 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:11 +0000 (0:00:00.200) 0:00:08.900 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:11 +0000 (0:00:00.221) 0:00:09.122 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:12 +0000 (0:00:00.207) 0:00:09.329 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:12 +0000 (0:00:00.209) 0:00:09.538 *********
ok: [testbed-node-3] => (item=sda1)
ok: [testbed-node-3] => (item=sda14)
ok: [testbed-node-3] => (item=sda15)
ok: [testbed-node-3] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:13 +0000 (0:00:01.114) 0:00:10.653 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:13 +0000 (0:00:00.192) 0:00:10.846 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:13 +0000 (0:00:00.186) 0:00:11.032 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 07 April 2026 00:47:13 +0000 (0:00:00.198) 0:00:11.231 *********
skipping: [testbed-node-3]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Tuesday 07 April 2026 00:47:14 +0000 (0:00:00.201) 0:00:11.432 *********
skipping: [testbed-node-3]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Tuesday 07 April 2026 00:47:14 +0000 (0:00:00.141) 0:00:11.574 *********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68f67d56-373d-5470-8a0c-a7bd578cf9eb'}})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'}})

TASK [Create block VGs] ********************************************************
Tuesday 07 April 2026 00:47:14 +0000 (0:00:00.204) 0:00:11.779 *********
changed: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
changed: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})

TASK [Print 'Create block VGs'] ************************************************
Tuesday 07 April 2026 00:47:16 +0000 (0:00:02.052) 0:00:13.832 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})
skipping: [testbed-node-3]

TASK [Create block LVs] ********************************************************
Tuesday 07 April 2026 00:47:16 +0000 (0:00:00.145) 0:00:13.977 *********
changed: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
changed: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})

TASK [Print 'Create block LVs'] ************************************************
Tuesday 07 April 2026 00:47:18 +0000 (0:00:01.540) 0:00:15.518 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})
skipping: [testbed-node-3]

TASK [Create DB VGs] ***********************************************************
Tuesday 07 April 2026 00:47:18 +0000 (0:00:00.157) 0:00:15.676 *********
skipping: [testbed-node-3]

TASK [Print 'Create DB VGs'] ***************************************************
Tuesday 07 April 2026 00:47:18 +0000 (0:00:00.125) 0:00:15.801 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})
skipping: [testbed-node-3]

TASK [Create WAL VGs] **********************************************************
Tuesday 07 April 2026 00:47:18 +0000 (0:00:00.371) 0:00:16.173 *********
skipping: [testbed-node-3]

TASK [Print 'Create WAL VGs'] **************************************************
Tuesday 07 April 2026 00:47:19 +0000 (0:00:00.155) 0:00:16.328 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})
skipping: [testbed-node-3]

TASK [Create DB+WAL VGs] *******************************************************
Tuesday 07 April 2026 00:47:19 +0000 (0:00:00.170) 0:00:16.499 *********
skipping: [testbed-node-3]

TASK [Print 'Create DB+WAL VGs'] ***********************************************
Tuesday 07 April 2026 00:47:19 +0000 (0:00:00.146) 0:00:16.645 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})
skipping: [testbed-node-3]

TASK [Prepare variables for OSD count check] ***********************************
Tuesday 07 April 2026 00:47:19 +0000 (0:00:00.173) 0:00:16.819 *********
ok: [testbed-node-3]

TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
Tuesday 07 April 2026 00:47:19 +0000 (0:00:00.166) 0:00:16.985 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})
skipping: [testbed-node-3]

TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
Tuesday 07 April 2026 00:47:19 +0000 (0:00:00.176) 0:00:17.161 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})
skipping: [testbed-node-3]

TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
Tuesday 07 April 2026 00:47:20 +0000 (0:00:00.196) 0:00:17.358 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
Tuesday 07 April 2026 00:47:20 +0000 (0:00:00.166) 0:00:17.525 *********
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
Tuesday 07 April 2026 00:47:20 +0000 (0:00:00.140) 0:00:17.665 *********
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
Tuesday 07 April 2026 00:47:20 +0000 (0:00:00.168) 0:00:17.834 *********
skipping: [testbed-node-3]

TASK [Print number of OSDs wanted per DB VG] ***********************************
Tuesday 07 April 2026 00:47:20 +0000 (0:00:00.132) 0:00:17.966 *********
ok: [testbed-node-3] => {
    "_num_osds_wanted_per_db_vg": {}
}

TASK [Print number of OSDs wanted per WAL VG] **********************************
Tuesday 07 April 2026 00:47:21 +0000 (0:00:00.358) 0:00:18.325 *********
ok: [testbed-node-3] => {
    "_num_osds_wanted_per_wal_vg": {}
}

TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
Tuesday 07 April 2026 00:47:21 +0000 (0:00:00.132) 0:00:18.457 *********
ok: [testbed-node-3] => {
    "_num_osds_wanted_per_db_wal_vg": {}
}

TASK [Gather DB VGs with total and available size in bytes] ********************
Tuesday 07 April 2026 00:47:21 +0000 (0:00:00.125) 0:00:18.582 *********
ok: [testbed-node-3]

TASK [Gather WAL VGs with total and available size in bytes] *******************
Tuesday 07 April 2026 00:47:21 +0000 (0:00:00.667) 0:00:19.250 *********
ok: [testbed-node-3]

TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
Tuesday 07 April 2026 00:47:22 +0000 (0:00:00.531) 0:00:19.782 *********
ok: [testbed-node-3]

TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
Tuesday 07 April 2026 00:47:22 +0000 (0:00:00.464) 0:00:20.246 *********
ok: [testbed-node-3]

TASK [Calculate VG sizes (without buffer)] *************************************
Tuesday 07 April 2026 00:47:23 +0000 (0:00:00.133) 0:00:20.380 *********
skipping: [testbed-node-3]

TASK [Calculate VG sizes (with buffer)] ****************************************
Tuesday 07 April 2026 00:47:23 +0000 (0:00:00.088) 0:00:20.468 *********
skipping: [testbed-node-3]

TASK [Print LVM VGs report data] ***********************************************
Tuesday 07 April 2026 00:47:23 +0000 (0:00:00.082) 0:00:20.550 *********
ok: [testbed-node-3] => {
    "vgs_report": {
        "vg": []
    }
}

TASK [Print LVM VG sizes] ******************************************************
Tuesday 07 April 2026 00:47:23 +0000 (0:00:00.125) 0:00:20.676 *********
skipping: [testbed-node-3]

TASK [Calculate size needed for LVs on ceph_db_devices] ************************
Tuesday 07 April 2026 00:47:23 +0000 (0:00:00.131) 0:00:20.808 *********
skipping: [testbed-node-3]

TASK [Print size needed for LVs on ceph_db_devices] ****************************
Tuesday 07 April 2026 00:47:23 +0000 (0:00:00.138) 0:00:20.946 *********
skipping: [testbed-node-3]

TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
Tuesday 07 April 2026 00:47:24 +0000 (0:00:00.350) 0:00:21.296 *********
skipping: [testbed-node-3]

TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
Tuesday 07 April 2026 00:47:24 +0000 (0:00:00.128) 0:00:21.425 *********
skipping: [testbed-node-3]

TASK [Print size needed for LVs on ceph_wal_devices] ***************************
Tuesday 07 April 2026 00:47:24 +0000 (0:00:00.140) 0:00:21.565 *********
skipping: [testbed-node-3]

TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
Tuesday 07 April 2026 00:47:24 +0000 (0:00:00.134) 0:00:21.700 *********
skipping: [testbed-node-3]

TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
Tuesday 07 April 2026 00:47:24 +0000 (0:00:00.131) 0:00:21.831 *********
skipping: [testbed-node-3]

TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
Tuesday 07 April 2026 00:47:24 +0000 (0:00:00.131) 0:00:21.962 *********
skipping: [testbed-node-3]

TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-07 00:47:26.757244 | orchestrator | Tuesday 07 April 2026 00:47:24 +0000 (0:00:00.143) 0:00:22.106 ********* 2026-04-07 00:47:26.757251 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757258 | orchestrator | 2026-04-07 00:47:26.757264 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-07 00:47:26.757271 | orchestrator | Tuesday 07 April 2026 00:47:24 +0000 (0:00:00.145) 0:00:22.251 ********* 2026-04-07 00:47:26.757277 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757283 | orchestrator | 2026-04-07 00:47:26.757290 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-07 00:47:26.757296 | orchestrator | Tuesday 07 April 2026 00:47:25 +0000 (0:00:00.156) 0:00:22.408 ********* 2026-04-07 00:47:26.757303 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757309 | orchestrator | 2026-04-07 00:47:26.757316 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-07 00:47:26.757323 | orchestrator | Tuesday 07 April 2026 00:47:25 +0000 (0:00:00.154) 0:00:22.563 ********* 2026-04-07 00:47:26.757330 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757336 | orchestrator | 2026-04-07 00:47:26.757343 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-07 00:47:26.757372 | orchestrator | Tuesday 07 April 2026 00:47:25 +0000 (0:00:00.145) 0:00:22.708 ********* 2026-04-07 00:47:26.757379 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757385 | orchestrator | 2026-04-07 00:47:26.757395 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-07 00:47:26.757401 | orchestrator | Tuesday 07 April 2026 00:47:25 +0000 (0:00:00.145) 0:00:22.854 ********* 2026-04-07 00:47:26.757409 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:26.757417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:26.757423 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757429 | orchestrator | 2026-04-07 00:47:26.757435 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-07 00:47:26.757441 | orchestrator | Tuesday 07 April 2026 00:47:25 +0000 (0:00:00.202) 0:00:23.056 ********* 2026-04-07 00:47:26.757448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:26.757454 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:26.757460 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757466 | orchestrator | 2026-04-07 00:47:26.757472 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-07 00:47:26.757479 | orchestrator | Tuesday 07 April 2026 00:47:26 +0000 (0:00:00.412) 0:00:23.468 ********* 2026-04-07 00:47:26.757485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:26.757491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:26.757503 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757509 | orchestrator | 2026-04-07 00:47:26.757515 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-04-07 00:47:26.757520 | orchestrator | Tuesday 07 April 2026 00:47:26 +0000 (0:00:00.174) 0:00:23.643 ********* 2026-04-07 00:47:26.757526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:26.757533 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:26.757539 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757545 | orchestrator | 2026-04-07 00:47:26.757552 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-07 00:47:26.757558 | orchestrator | Tuesday 07 April 2026 00:47:26 +0000 (0:00:00.154) 0:00:23.797 ********* 2026-04-07 00:47:26.757565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:26.757571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:26.757578 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:26.757584 | orchestrator | 2026-04-07 00:47:26.757590 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-07 00:47:26.757597 | orchestrator | Tuesday 07 April 2026 00:47:26 +0000 (0:00:00.158) 0:00:23.956 ********* 2026-04-07 00:47:26.757608 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:32.096457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:32.096530 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:32.096536 | orchestrator | 2026-04-07 00:47:32.096542 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-07 00:47:32.096547 | orchestrator | Tuesday 07 April 2026 00:47:26 +0000 (0:00:00.182) 0:00:24.139 ********* 2026-04-07 00:47:32.096552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:32.096556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:32.096560 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:32.096564 | orchestrator | 2026-04-07 00:47:32.096568 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-07 00:47:32.096572 | orchestrator | Tuesday 07 April 2026 00:47:27 +0000 (0:00:00.184) 0:00:24.323 ********* 2026-04-07 00:47:32.096576 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:32.096591 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:32.096595 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:32.096599 | orchestrator | 2026-04-07 00:47:32.096603 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-07 00:47:32.096607 | orchestrator | Tuesday 07 April 2026 00:47:27 +0000 (0:00:00.157) 0:00:24.480 ********* 2026-04-07 00:47:32.096610 | 
orchestrator | ok: [testbed-node-3] 2026-04-07 00:47:32.096615 | orchestrator | 2026-04-07 00:47:32.096631 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-07 00:47:32.096635 | orchestrator | Tuesday 07 April 2026 00:47:27 +0000 (0:00:00.522) 0:00:25.003 ********* 2026-04-07 00:47:32.096639 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:47:32.096643 | orchestrator | 2026-04-07 00:47:32.096647 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-07 00:47:32.096650 | orchestrator | Tuesday 07 April 2026 00:47:28 +0000 (0:00:00.531) 0:00:25.535 ********* 2026-04-07 00:47:32.096654 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:47:32.096658 | orchestrator | 2026-04-07 00:47:32.096662 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-07 00:47:32.096665 | orchestrator | Tuesday 07 April 2026 00:47:28 +0000 (0:00:00.157) 0:00:25.692 ********* 2026-04-07 00:47:32.096669 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'vg_name': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'}) 2026-04-07 00:47:32.096675 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'vg_name': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'}) 2026-04-07 00:47:32.096679 | orchestrator | 2026-04-07 00:47:32.096683 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-07 00:47:32.096687 | orchestrator | Tuesday 07 April 2026 00:47:28 +0000 (0:00:00.186) 0:00:25.878 ********* 2026-04-07 00:47:32.096691 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:32.096695 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:32.096698 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:32.096702 | orchestrator | 2026-04-07 00:47:32.096706 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-07 00:47:32.096710 | orchestrator | Tuesday 07 April 2026 00:47:28 +0000 (0:00:00.146) 0:00:26.025 ********* 2026-04-07 00:47:32.096714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:32.096717 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:32.096721 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:32.096725 | orchestrator | 2026-04-07 00:47:32.096729 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-07 00:47:32.096732 | orchestrator | Tuesday 07 April 2026 00:47:29 +0000 (0:00:00.374) 0:00:26.399 ********* 2026-04-07 00:47:32.096736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})  2026-04-07 00:47:32.096740 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})  2026-04-07 00:47:32.096744 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:47:32.096748 | orchestrator | 2026-04-07 00:47:32.096751 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-07 00:47:32.096755 | orchestrator | Tuesday 07 April 2026 00:47:29 +0000 (0:00:00.140) 0:00:26.540 ********* 2026-04-07 00:47:32.096770 | 
orchestrator | ok: [testbed-node-3] => { 2026-04-07 00:47:32.096774 | orchestrator |  "lvm_report": { 2026-04-07 00:47:32.096778 | orchestrator |  "lv": [ 2026-04-07 00:47:32.096782 | orchestrator |  { 2026-04-07 00:47:32.096786 | orchestrator |  "lv_name": "osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb", 2026-04-07 00:47:32.096790 | orchestrator |  "vg_name": "ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb" 2026-04-07 00:47:32.096794 | orchestrator |  }, 2026-04-07 00:47:32.096802 | orchestrator |  { 2026-04-07 00:47:32.096806 | orchestrator |  "lv_name": "osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d", 2026-04-07 00:47:32.096810 | orchestrator |  "vg_name": "ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d" 2026-04-07 00:47:32.096813 | orchestrator |  } 2026-04-07 00:47:32.096817 | orchestrator |  ], 2026-04-07 00:47:32.096821 | orchestrator |  "pv": [ 2026-04-07 00:47:32.096825 | orchestrator |  { 2026-04-07 00:47:32.096829 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-07 00:47:32.096833 | orchestrator |  "vg_name": "ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb" 2026-04-07 00:47:32.096836 | orchestrator |  }, 2026-04-07 00:47:32.096840 | orchestrator |  { 2026-04-07 00:47:32.096844 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-07 00:47:32.096848 | orchestrator |  "vg_name": "ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d" 2026-04-07 00:47:32.096851 | orchestrator |  } 2026-04-07 00:47:32.096855 | orchestrator |  ] 2026-04-07 00:47:32.096859 | orchestrator |  } 2026-04-07 00:47:32.096863 | orchestrator | } 2026-04-07 00:47:32.096867 | orchestrator | 2026-04-07 00:47:32.096871 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-07 00:47:32.096875 | orchestrator | 2026-04-07 00:47:32.096879 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-07 00:47:32.096883 | orchestrator | Tuesday 07 April 2026 00:47:29 +0000 (0:00:00.265) 0:00:26.805 ********* 2026-04-07 
00:47:32.096887 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-07 00:47:32.096891 | orchestrator | 2026-04-07 00:47:32.096895 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-07 00:47:32.096899 | orchestrator | Tuesday 07 April 2026 00:47:29 +0000 (0:00:00.237) 0:00:27.043 ********* 2026-04-07 00:47:32.096902 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:47:32.096906 | orchestrator | 2026-04-07 00:47:32.096910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:32.096914 | orchestrator | Tuesday 07 April 2026 00:47:29 +0000 (0:00:00.216) 0:00:27.259 ********* 2026-04-07 00:47:32.096918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-07 00:47:32.096921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-07 00:47:32.096925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-07 00:47:32.096929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-07 00:47:32.096933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-07 00:47:32.096936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-07 00:47:32.096940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-07 00:47:32.096944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-07 00:47:32.096948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-07 00:47:32.096956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-07 
00:47:32.096960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-07 00:47:32.096963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-07 00:47:32.096967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-07 00:47:32.096971 | orchestrator | 2026-04-07 00:47:32.096975 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:32.096978 | orchestrator | Tuesday 07 April 2026 00:47:30 +0000 (0:00:00.425) 0:00:27.684 ********* 2026-04-07 00:47:32.096982 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:32.096990 | orchestrator | 2026-04-07 00:47:32.096994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:32.096999 | orchestrator | Tuesday 07 April 2026 00:47:30 +0000 (0:00:00.190) 0:00:27.875 ********* 2026-04-07 00:47:32.097004 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:32.097008 | orchestrator | 2026-04-07 00:47:32.097013 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:32.097017 | orchestrator | Tuesday 07 April 2026 00:47:30 +0000 (0:00:00.204) 0:00:28.079 ********* 2026-04-07 00:47:32.097022 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:32.097027 | orchestrator | 2026-04-07 00:47:32.097031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:32.097036 | orchestrator | Tuesday 07 April 2026 00:47:30 +0000 (0:00:00.194) 0:00:28.274 ********* 2026-04-07 00:47:32.097040 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:32.097045 | orchestrator | 2026-04-07 00:47:32.097049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:32.097054 | orchestrator 
| Tuesday 07 April 2026 00:47:31 +0000 (0:00:00.675) 0:00:28.950 ********* 2026-04-07 00:47:32.097058 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:32.097063 | orchestrator | 2026-04-07 00:47:32.097068 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:32.097072 | orchestrator | Tuesday 07 April 2026 00:47:31 +0000 (0:00:00.218) 0:00:29.168 ********* 2026-04-07 00:47:32.097077 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:32.097081 | orchestrator | 2026-04-07 00:47:32.097088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:42.476267 | orchestrator | Tuesday 07 April 2026 00:47:32 +0000 (0:00:00.208) 0:00:29.376 ********* 2026-04-07 00:47:42.476318 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476325 | orchestrator | 2026-04-07 00:47:42.476329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:42.476334 | orchestrator | Tuesday 07 April 2026 00:47:32 +0000 (0:00:00.230) 0:00:29.606 ********* 2026-04-07 00:47:42.476338 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476366 | orchestrator | 2026-04-07 00:47:42.476370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:42.476374 | orchestrator | Tuesday 07 April 2026 00:47:32 +0000 (0:00:00.205) 0:00:29.812 ********* 2026-04-07 00:47:42.476379 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988) 2026-04-07 00:47:42.476383 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988) 2026-04-07 00:47:42.476387 | orchestrator | 2026-04-07 00:47:42.476391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:42.476395 | orchestrator | Tuesday 
07 April 2026 00:47:32 +0000 (0:00:00.464) 0:00:30.276 ********* 2026-04-07 00:47:42.476399 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30) 2026-04-07 00:47:42.476403 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30) 2026-04-07 00:47:42.476407 | orchestrator | 2026-04-07 00:47:42.476418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:42.476422 | orchestrator | Tuesday 07 April 2026 00:47:33 +0000 (0:00:00.441) 0:00:30.718 ********* 2026-04-07 00:47:42.476426 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e) 2026-04-07 00:47:42.476430 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e) 2026-04-07 00:47:42.476433 | orchestrator | 2026-04-07 00:47:42.476437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:42.476441 | orchestrator | Tuesday 07 April 2026 00:47:33 +0000 (0:00:00.446) 0:00:31.164 ********* 2026-04-07 00:47:42.476445 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39) 2026-04-07 00:47:42.476457 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39) 2026-04-07 00:47:42.476461 | orchestrator | 2026-04-07 00:47:42.476465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:42.476469 | orchestrator | Tuesday 07 April 2026 00:47:34 +0000 (0:00:00.432) 0:00:31.596 ********* 2026-04-07 00:47:42.476473 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-07 00:47:42.476477 | orchestrator | 2026-04-07 00:47:42.476480 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2026-04-07 00:47:42.476484 | orchestrator | Tuesday 07 April 2026 00:47:34 +0000 (0:00:00.328) 0:00:31.925 ********* 2026-04-07 00:47:42.476488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-07 00:47:42.476492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-07 00:47:42.476496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-07 00:47:42.476500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-07 00:47:42.476504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-07 00:47:42.476507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-07 00:47:42.476511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-07 00:47:42.476516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-07 00:47:42.476519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-07 00:47:42.476523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-07 00:47:42.476527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-07 00:47:42.476531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-07 00:47:42.476534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-07 00:47:42.476538 | orchestrator | 2026-04-07 00:47:42.476542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
2026-04-07 00:47:42.476545 | orchestrator | Tuesday 07 April 2026 00:47:35 +0000 (0:00:00.554) 0:00:32.479 ********* 2026-04-07 00:47:42.476549 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476553 | orchestrator | 2026-04-07 00:47:42.476557 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476561 | orchestrator | Tuesday 07 April 2026 00:47:35 +0000 (0:00:00.176) 0:00:32.656 ********* 2026-04-07 00:47:42.476564 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476568 | orchestrator | 2026-04-07 00:47:42.476572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476576 | orchestrator | Tuesday 07 April 2026 00:47:35 +0000 (0:00:00.168) 0:00:32.824 ********* 2026-04-07 00:47:42.476580 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476583 | orchestrator | 2026-04-07 00:47:42.476595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476599 | orchestrator | Tuesday 07 April 2026 00:47:35 +0000 (0:00:00.174) 0:00:32.999 ********* 2026-04-07 00:47:42.476603 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476607 | orchestrator | 2026-04-07 00:47:42.476611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476614 | orchestrator | Tuesday 07 April 2026 00:47:35 +0000 (0:00:00.172) 0:00:33.172 ********* 2026-04-07 00:47:42.476618 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476622 | orchestrator | 2026-04-07 00:47:42.476626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476632 | orchestrator | Tuesday 07 April 2026 00:47:36 +0000 (0:00:00.220) 0:00:33.393 ********* 2026-04-07 00:47:42.476636 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476640 
| orchestrator | 2026-04-07 00:47:42.476644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476648 | orchestrator | Tuesday 07 April 2026 00:47:36 +0000 (0:00:00.197) 0:00:33.590 ********* 2026-04-07 00:47:42.476651 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476655 | orchestrator | 2026-04-07 00:47:42.476659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476663 | orchestrator | Tuesday 07 April 2026 00:47:36 +0000 (0:00:00.177) 0:00:33.768 ********* 2026-04-07 00:47:42.476667 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476670 | orchestrator | 2026-04-07 00:47:42.476674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476680 | orchestrator | Tuesday 07 April 2026 00:47:36 +0000 (0:00:00.193) 0:00:33.961 ********* 2026-04-07 00:47:42.476684 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-07 00:47:42.476688 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-07 00:47:42.476691 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-07 00:47:42.476695 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-07 00:47:42.476699 | orchestrator | 2026-04-07 00:47:42.476703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476707 | orchestrator | Tuesday 07 April 2026 00:47:37 +0000 (0:00:00.890) 0:00:34.852 ********* 2026-04-07 00:47:42.476710 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476714 | orchestrator | 2026-04-07 00:47:42.476718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476722 | orchestrator | Tuesday 07 April 2026 00:47:37 +0000 (0:00:00.182) 0:00:35.035 ********* 2026-04-07 00:47:42.476725 | orchestrator | skipping: 
[testbed-node-4] 2026-04-07 00:47:42.476731 | orchestrator | 2026-04-07 00:47:42.476737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476742 | orchestrator | Tuesday 07 April 2026 00:47:37 +0000 (0:00:00.201) 0:00:35.236 ********* 2026-04-07 00:47:42.476753 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476759 | orchestrator | 2026-04-07 00:47:42.476765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:42.476771 | orchestrator | Tuesday 07 April 2026 00:47:38 +0000 (0:00:00.724) 0:00:35.961 ********* 2026-04-07 00:47:42.476777 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476783 | orchestrator | 2026-04-07 00:47:42.476789 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-07 00:47:42.476795 | orchestrator | Tuesday 07 April 2026 00:47:38 +0000 (0:00:00.235) 0:00:36.197 ********* 2026-04-07 00:47:42.476802 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476808 | orchestrator | 2026-04-07 00:47:42.476815 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-07 00:47:42.476820 | orchestrator | Tuesday 07 April 2026 00:47:39 +0000 (0:00:00.147) 0:00:36.344 ********* 2026-04-07 00:47:42.476824 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43d30fb7-a654-5dbf-ba50-28c21932998c'}}) 2026-04-07 00:47:42.476828 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db8a0de8-f58a-5642-89e2-a8dce5d117db'}}) 2026-04-07 00:47:42.476831 | orchestrator | 2026-04-07 00:47:42.476835 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-07 00:47:42.476839 | orchestrator | Tuesday 07 April 2026 00:47:39 +0000 (0:00:00.213) 0:00:36.558 ********* 2026-04-07 
00:47:42.476843 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'}) 2026-04-07 00:47:42.476848 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'}) 2026-04-07 00:47:42.476857 | orchestrator | 2026-04-07 00:47:42.476861 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-07 00:47:42.476864 | orchestrator | Tuesday 07 April 2026 00:47:41 +0000 (0:00:01.955) 0:00:38.513 ********* 2026-04-07 00:47:42.476868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:42.476873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:42.476877 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:42.476880 | orchestrator | 2026-04-07 00:47:42.476884 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-07 00:47:42.476888 | orchestrator | Tuesday 07 April 2026 00:47:41 +0000 (0:00:00.152) 0:00:38.666 ********* 2026-04-07 00:47:42.476892 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'}) 2026-04-07 00:47:42.476899 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'}) 2026-04-07 00:47:48.037514 | orchestrator | 2026-04-07 00:47:48.037597 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-07 
00:47:48.037608 | orchestrator | Tuesday 07 April 2026 00:47:42 +0000 (0:00:01.180) 0:00:39.846 ********* 2026-04-07 00:47:48.037615 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:48.037629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:48.037638 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.037647 | orchestrator | 2026-04-07 00:47:48.037654 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-07 00:47:48.037662 | orchestrator | Tuesday 07 April 2026 00:47:42 +0000 (0:00:00.197) 0:00:40.044 ********* 2026-04-07 00:47:48.037670 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.037677 | orchestrator | 2026-04-07 00:47:48.037685 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-07 00:47:48.037693 | orchestrator | Tuesday 07 April 2026 00:47:42 +0000 (0:00:00.139) 0:00:40.184 ********* 2026-04-07 00:47:48.037701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:48.037709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:48.037716 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.037721 | orchestrator | 2026-04-07 00:47:48.037725 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-07 00:47:48.037730 | orchestrator | Tuesday 07 April 2026 00:47:43 +0000 (0:00:00.185) 0:00:40.370 ********* 2026-04-07 00:47:48.037735 | 
orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.037740 | orchestrator | 2026-04-07 00:47:48.037745 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-07 00:47:48.037749 | orchestrator | Tuesday 07 April 2026 00:47:43 +0000 (0:00:00.137) 0:00:40.507 ********* 2026-04-07 00:47:48.037754 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:48.037758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:48.037781 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.037786 | orchestrator | 2026-04-07 00:47:48.037790 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-07 00:47:48.037795 | orchestrator | Tuesday 07 April 2026 00:47:43 +0000 (0:00:00.170) 0:00:40.678 ********* 2026-04-07 00:47:48.037799 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.037804 | orchestrator | 2026-04-07 00:47:48.037822 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-07 00:47:48.037827 | orchestrator | Tuesday 07 April 2026 00:47:43 +0000 (0:00:00.352) 0:00:41.030 ********* 2026-04-07 00:47:48.037832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:48.037836 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:48.037844 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.037851 | orchestrator | 2026-04-07 00:47:48.037860 | orchestrator | TASK 
[Prepare variables for OSD count check] *********************************** 2026-04-07 00:47:48.037871 | orchestrator | Tuesday 07 April 2026 00:47:43 +0000 (0:00:00.172) 0:00:41.203 ********* 2026-04-07 00:47:48.037879 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:47:48.037888 | orchestrator | 2026-04-07 00:47:48.037894 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-07 00:47:48.037902 | orchestrator | Tuesday 07 April 2026 00:47:44 +0000 (0:00:00.151) 0:00:41.354 ********* 2026-04-07 00:47:48.037908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:48.037930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:48.037946 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.037954 | orchestrator | 2026-04-07 00:47:48.037962 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-07 00:47:48.037969 | orchestrator | Tuesday 07 April 2026 00:47:44 +0000 (0:00:00.152) 0:00:41.507 ********* 2026-04-07 00:47:48.037977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:48.037984 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:48.037989 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.037995 | orchestrator | 2026-04-07 00:47:48.038001 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-07 00:47:48.038062 | orchestrator | Tuesday 07 April 2026 
00:47:44 +0000 (0:00:00.140) 0:00:41.647 ********* 2026-04-07 00:47:48.038079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:48.038088 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:48.038105 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038113 | orchestrator | 2026-04-07 00:47:48.038121 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-07 00:47:48.038129 | orchestrator | Tuesday 07 April 2026 00:47:44 +0000 (0:00:00.132) 0:00:41.780 ********* 2026-04-07 00:47:48.038136 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038142 | orchestrator | 2026-04-07 00:47:48.038147 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-07 00:47:48.038153 | orchestrator | Tuesday 07 April 2026 00:47:44 +0000 (0:00:00.129) 0:00:41.910 ********* 2026-04-07 00:47:48.038169 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038182 | orchestrator | 2026-04-07 00:47:48.038238 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-07 00:47:48.038253 | orchestrator | Tuesday 07 April 2026 00:47:44 +0000 (0:00:00.120) 0:00:42.031 ********* 2026-04-07 00:47:48.038261 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038267 | orchestrator | 2026-04-07 00:47:48.038272 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-07 00:47:48.038277 | orchestrator | Tuesday 07 April 2026 00:47:44 +0000 (0:00:00.126) 0:00:42.157 ********* 2026-04-07 00:47:48.038283 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 00:47:48.038288 | orchestrator |  
"_num_osds_wanted_per_db_vg": {} 2026-04-07 00:47:48.038294 | orchestrator | } 2026-04-07 00:47:48.038300 | orchestrator | 2026-04-07 00:47:48.038305 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-07 00:47:48.038311 | orchestrator | Tuesday 07 April 2026 00:47:45 +0000 (0:00:00.151) 0:00:42.309 ********* 2026-04-07 00:47:48.038316 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 00:47:48.038322 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-07 00:47:48.038327 | orchestrator | } 2026-04-07 00:47:48.038333 | orchestrator | 2026-04-07 00:47:48.038361 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-07 00:47:48.038366 | orchestrator | Tuesday 07 April 2026 00:47:45 +0000 (0:00:00.147) 0:00:42.456 ********* 2026-04-07 00:47:48.038371 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 00:47:48.038379 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-07 00:47:48.038389 | orchestrator | } 2026-04-07 00:47:48.038400 | orchestrator | 2026-04-07 00:47:48.038407 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-07 00:47:48.038416 | orchestrator | Tuesday 07 April 2026 00:47:45 +0000 (0:00:00.144) 0:00:42.601 ********* 2026-04-07 00:47:48.038424 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:47:48.038432 | orchestrator | 2026-04-07 00:47:48.038440 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-07 00:47:48.038447 | orchestrator | Tuesday 07 April 2026 00:47:46 +0000 (0:00:00.725) 0:00:43.326 ********* 2026-04-07 00:47:48.038454 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:47:48.038462 | orchestrator | 2026-04-07 00:47:48.038470 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-07 00:47:48.038478 | orchestrator | Tuesday 07 April 2026 00:47:46 +0000 
(0:00:00.578) 0:00:43.905 ********* 2026-04-07 00:47:48.038485 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:47:48.038493 | orchestrator | 2026-04-07 00:47:48.038500 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-07 00:47:48.038509 | orchestrator | Tuesday 07 April 2026 00:47:47 +0000 (0:00:00.522) 0:00:44.427 ********* 2026-04-07 00:47:48.038517 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:47:48.038525 | orchestrator | 2026-04-07 00:47:48.038533 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-07 00:47:48.038541 | orchestrator | Tuesday 07 April 2026 00:47:47 +0000 (0:00:00.120) 0:00:44.548 ********* 2026-04-07 00:47:48.038548 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038555 | orchestrator | 2026-04-07 00:47:48.038563 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-07 00:47:48.038571 | orchestrator | Tuesday 07 April 2026 00:47:47 +0000 (0:00:00.094) 0:00:44.642 ********* 2026-04-07 00:47:48.038579 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038587 | orchestrator | 2026-04-07 00:47:48.038595 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-07 00:47:48.038603 | orchestrator | Tuesday 07 April 2026 00:47:47 +0000 (0:00:00.084) 0:00:44.726 ********* 2026-04-07 00:47:48.038611 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 00:47:48.038620 | orchestrator |  "vgs_report": { 2026-04-07 00:47:48.038628 | orchestrator |  "vg": [] 2026-04-07 00:47:48.038635 | orchestrator |  } 2026-04-07 00:47:48.038643 | orchestrator | } 2026-04-07 00:47:48.038661 | orchestrator | 2026-04-07 00:47:48.038669 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-07 00:47:48.038677 | orchestrator | Tuesday 07 April 2026 00:47:47 +0000 (0:00:00.129) 
0:00:44.855 ********* 2026-04-07 00:47:48.038685 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038692 | orchestrator | 2026-04-07 00:47:48.038700 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-07 00:47:48.038708 | orchestrator | Tuesday 07 April 2026 00:47:47 +0000 (0:00:00.107) 0:00:44.963 ********* 2026-04-07 00:47:48.038716 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038723 | orchestrator | 2026-04-07 00:47:48.038731 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-07 00:47:48.038739 | orchestrator | Tuesday 07 April 2026 00:47:47 +0000 (0:00:00.112) 0:00:45.076 ********* 2026-04-07 00:47:48.038747 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038755 | orchestrator | 2026-04-07 00:47:48.038762 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-07 00:47:48.038770 | orchestrator | Tuesday 07 April 2026 00:47:47 +0000 (0:00:00.120) 0:00:45.196 ********* 2026-04-07 00:47:48.038778 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:48.038786 | orchestrator | 2026-04-07 00:47:48.038806 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-07 00:47:52.377102 | orchestrator | Tuesday 07 April 2026 00:47:48 +0000 (0:00:00.123) 0:00:45.320 ********* 2026-04-07 00:47:52.377172 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377180 | orchestrator | 2026-04-07 00:47:52.377185 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-07 00:47:52.377190 | orchestrator | Tuesday 07 April 2026 00:47:48 +0000 (0:00:00.149) 0:00:45.469 ********* 2026-04-07 00:47:52.377194 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377198 | orchestrator | 2026-04-07 00:47:52.377202 | orchestrator | TASK [Fail if size of WAL LVs on 
ceph_wal_devices > available] ***************** 2026-04-07 00:47:52.377206 | orchestrator | Tuesday 07 April 2026 00:47:48 +0000 (0:00:00.338) 0:00:45.807 ********* 2026-04-07 00:47:52.377210 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377214 | orchestrator | 2026-04-07 00:47:52.377218 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-07 00:47:52.377222 | orchestrator | Tuesday 07 April 2026 00:47:48 +0000 (0:00:00.123) 0:00:45.930 ********* 2026-04-07 00:47:52.377225 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377229 | orchestrator | 2026-04-07 00:47:52.377233 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-07 00:47:52.377237 | orchestrator | Tuesday 07 April 2026 00:47:48 +0000 (0:00:00.132) 0:00:46.063 ********* 2026-04-07 00:47:52.377252 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377257 | orchestrator | 2026-04-07 00:47:52.377260 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-07 00:47:52.377264 | orchestrator | Tuesday 07 April 2026 00:47:48 +0000 (0:00:00.141) 0:00:46.205 ********* 2026-04-07 00:47:52.377268 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377272 | orchestrator | 2026-04-07 00:47:52.377275 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-07 00:47:52.377279 | orchestrator | Tuesday 07 April 2026 00:47:49 +0000 (0:00:00.131) 0:00:46.336 ********* 2026-04-07 00:47:52.377283 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377287 | orchestrator | 2026-04-07 00:47:52.377291 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-07 00:47:52.377295 | orchestrator | Tuesday 07 April 2026 00:47:49 +0000 (0:00:00.107) 0:00:46.444 ********* 2026-04-07 00:47:52.377298 | orchestrator | 
skipping: [testbed-node-4] 2026-04-07 00:47:52.377302 | orchestrator | 2026-04-07 00:47:52.377306 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-07 00:47:52.377310 | orchestrator | Tuesday 07 April 2026 00:47:49 +0000 (0:00:00.114) 0:00:46.559 ********* 2026-04-07 00:47:52.377314 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377332 | orchestrator | 2026-04-07 00:47:52.377355 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-07 00:47:52.377359 | orchestrator | Tuesday 07 April 2026 00:47:49 +0000 (0:00:00.122) 0:00:46.681 ********* 2026-04-07 00:47:52.377363 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377367 | orchestrator | 2026-04-07 00:47:52.377371 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-07 00:47:52.377374 | orchestrator | Tuesday 07 April 2026 00:47:49 +0000 (0:00:00.119) 0:00:46.801 ********* 2026-04-07 00:47:52.377380 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:52.377385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:52.377389 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377392 | orchestrator | 2026-04-07 00:47:52.377396 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-07 00:47:52.377400 | orchestrator | Tuesday 07 April 2026 00:47:49 +0000 (0:00:00.171) 0:00:46.972 ********* 2026-04-07 00:47:52.377404 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 
00:47:52.377408 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:52.377411 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377415 | orchestrator | 2026-04-07 00:47:52.377419 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-07 00:47:52.377423 | orchestrator | Tuesday 07 April 2026 00:47:49 +0000 (0:00:00.126) 0:00:47.099 ********* 2026-04-07 00:47:52.377426 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:52.377430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:52.377434 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377438 | orchestrator | 2026-04-07 00:47:52.377441 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-07 00:47:52.377445 | orchestrator | Tuesday 07 April 2026 00:47:49 +0000 (0:00:00.132) 0:00:47.231 ********* 2026-04-07 00:47:52.377449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:52.377453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:52.377457 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377461 | orchestrator | 2026-04-07 00:47:52.377476 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-07 00:47:52.377480 | orchestrator | Tuesday 07 April 2026 
00:47:50 +0000 (0:00:00.282) 0:00:47.514 ********* 2026-04-07 00:47:52.377484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:52.377488 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:52.377491 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377495 | orchestrator | 2026-04-07 00:47:52.377499 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-07 00:47:52.377503 | orchestrator | Tuesday 07 April 2026 00:47:50 +0000 (0:00:00.132) 0:00:47.647 ********* 2026-04-07 00:47:52.377512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:52.377516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:52.377520 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377523 | orchestrator | 2026-04-07 00:47:52.377527 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-07 00:47:52.377531 | orchestrator | Tuesday 07 April 2026 00:47:50 +0000 (0:00:00.158) 0:00:47.806 ********* 2026-04-07 00:47:52.377535 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:52.377539 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:52.377543 | orchestrator | 
skipping: [testbed-node-4] 2026-04-07 00:47:52.377547 | orchestrator | 2026-04-07 00:47:52.377550 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-07 00:47:52.377554 | orchestrator | Tuesday 07 April 2026 00:47:50 +0000 (0:00:00.128) 0:00:47.935 ********* 2026-04-07 00:47:52.377558 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:52.377562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:52.377565 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377569 | orchestrator | 2026-04-07 00:47:52.377573 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-07 00:47:52.377577 | orchestrator | Tuesday 07 April 2026 00:47:50 +0000 (0:00:00.113) 0:00:48.048 ********* 2026-04-07 00:47:52.377580 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:47:52.377584 | orchestrator | 2026-04-07 00:47:52.377588 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-07 00:47:52.377592 | orchestrator | Tuesday 07 April 2026 00:47:51 +0000 (0:00:00.551) 0:00:48.599 ********* 2026-04-07 00:47:52.377595 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:47:52.377599 | orchestrator | 2026-04-07 00:47:52.377603 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-07 00:47:52.377607 | orchestrator | Tuesday 07 April 2026 00:47:51 +0000 (0:00:00.526) 0:00:49.126 ********* 2026-04-07 00:47:52.377610 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:47:52.377614 | orchestrator | 2026-04-07 00:47:52.377618 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 
2026-04-07 00:47:52.377622 | orchestrator | Tuesday 07 April 2026 00:47:51 +0000 (0:00:00.144) 0:00:49.270 ********* 2026-04-07 00:47:52.377625 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'vg_name': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'}) 2026-04-07 00:47:52.377631 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'vg_name': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'}) 2026-04-07 00:47:52.377635 | orchestrator | 2026-04-07 00:47:52.377638 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-07 00:47:52.377642 | orchestrator | Tuesday 07 April 2026 00:47:52 +0000 (0:00:00.162) 0:00:49.432 ********* 2026-04-07 00:47:52.377646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:52.377677 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:52.377682 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:52.377690 | orchestrator | 2026-04-07 00:47:52.377694 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-07 00:47:52.377699 | orchestrator | Tuesday 07 April 2026 00:47:52 +0000 (0:00:00.155) 0:00:49.587 ********* 2026-04-07 00:47:52.377704 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:52.377711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:58.572142 | orchestrator | skipping: 
[testbed-node-4] 2026-04-07 00:47:58.572264 | orchestrator | 2026-04-07 00:47:58.572281 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-07 00:47:58.572295 | orchestrator | Tuesday 07 April 2026 00:47:52 +0000 (0:00:00.156) 0:00:49.744 ********* 2026-04-07 00:47:58.572307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})  2026-04-07 00:47:58.572319 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})  2026-04-07 00:47:58.572330 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:47:58.572369 | orchestrator | 2026-04-07 00:47:58.572381 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-07 00:47:58.572392 | orchestrator | Tuesday 07 April 2026 00:47:52 +0000 (0:00:00.156) 0:00:49.901 ********* 2026-04-07 00:47:58.572403 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 00:47:58.572414 | orchestrator |  "lvm_report": { 2026-04-07 00:47:58.572426 | orchestrator |  "lv": [ 2026-04-07 00:47:58.572453 | orchestrator |  { 2026-04-07 00:47:58.572464 | orchestrator |  "lv_name": "osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c", 2026-04-07 00:47:58.572476 | orchestrator |  "vg_name": "ceph-43d30fb7-a654-5dbf-ba50-28c21932998c" 2026-04-07 00:47:58.572487 | orchestrator |  }, 2026-04-07 00:47:58.572497 | orchestrator |  { 2026-04-07 00:47:58.572508 | orchestrator |  "lv_name": "osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db", 2026-04-07 00:47:58.572519 | orchestrator |  "vg_name": "ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db" 2026-04-07 00:47:58.572530 | orchestrator |  } 2026-04-07 00:47:58.572540 | orchestrator |  ], 2026-04-07 00:47:58.572551 | orchestrator |  "pv": [ 2026-04-07 00:47:58.572562 | orchestrator |  { 2026-04-07 
00:47:58.572573 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-07 00:47:58.572584 | orchestrator |  "vg_name": "ceph-43d30fb7-a654-5dbf-ba50-28c21932998c" 2026-04-07 00:47:58.572594 | orchestrator |  }, 2026-04-07 00:47:58.572605 | orchestrator |  { 2026-04-07 00:47:58.572616 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-07 00:47:58.572627 | orchestrator |  "vg_name": "ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db" 2026-04-07 00:47:58.572639 | orchestrator |  } 2026-04-07 00:47:58.572650 | orchestrator |  ] 2026-04-07 00:47:58.572663 | orchestrator |  } 2026-04-07 00:47:58.572675 | orchestrator | } 2026-04-07 00:47:58.572688 | orchestrator | 2026-04-07 00:47:58.572700 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-07 00:47:58.572713 | orchestrator | 2026-04-07 00:47:58.572725 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-07 00:47:58.572738 | orchestrator | Tuesday 07 April 2026 00:47:53 +0000 (0:00:00.492) 0:00:50.394 ********* 2026-04-07 00:47:58.572752 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-07 00:47:58.572765 | orchestrator | 2026-04-07 00:47:58.572779 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-07 00:47:58.572791 | orchestrator | Tuesday 07 April 2026 00:47:53 +0000 (0:00:00.235) 0:00:50.629 ********* 2026-04-07 00:47:58.572826 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:47:58.572839 | orchestrator | 2026-04-07 00:47:58.572852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.572865 | orchestrator | Tuesday 07 April 2026 00:47:53 +0000 (0:00:00.238) 0:00:50.868 ********* 2026-04-07 00:47:58.572877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-07 00:47:58.572890 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-07 00:47:58.572902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-07 00:47:58.572919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-07 00:47:58.572931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-07 00:47:58.572943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-07 00:47:58.572955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-07 00:47:58.572968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-07 00:47:58.572981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-07 00:47:58.572993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-07 00:47:58.573006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-07 00:47:58.573018 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-07 00:47:58.573029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-07 00:47:58.573040 | orchestrator | 2026-04-07 00:47:58.573051 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573061 | orchestrator | Tuesday 07 April 2026 00:47:53 +0000 (0:00:00.415) 0:00:51.284 ********* 2026-04-07 00:47:58.573072 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:47:58.573083 | orchestrator | 2026-04-07 00:47:58.573093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 
00:47:58.573104 | orchestrator | Tuesday 07 April 2026 00:47:54 +0000 (0:00:00.235) 0:00:51.519 ********* 2026-04-07 00:47:58.573115 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:47:58.573125 | orchestrator | 2026-04-07 00:47:58.573136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573165 | orchestrator | Tuesday 07 April 2026 00:47:54 +0000 (0:00:00.235) 0:00:51.754 ********* 2026-04-07 00:47:58.573176 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:47:58.573187 | orchestrator | 2026-04-07 00:47:58.573198 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573209 | orchestrator | Tuesday 07 April 2026 00:47:54 +0000 (0:00:00.215) 0:00:51.969 ********* 2026-04-07 00:47:58.573219 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:47:58.573230 | orchestrator | 2026-04-07 00:47:58.573241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573252 | orchestrator | Tuesday 07 April 2026 00:47:54 +0000 (0:00:00.193) 0:00:52.163 ********* 2026-04-07 00:47:58.573262 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:47:58.573273 | orchestrator | 2026-04-07 00:47:58.573283 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573294 | orchestrator | Tuesday 07 April 2026 00:47:55 +0000 (0:00:00.205) 0:00:52.368 ********* 2026-04-07 00:47:58.573305 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:47:58.573316 | orchestrator | 2026-04-07 00:47:58.573327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573361 | orchestrator | Tuesday 07 April 2026 00:47:55 +0000 (0:00:00.672) 0:00:53.040 ********* 2026-04-07 00:47:58.573373 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:47:58.573392 | 
orchestrator | 2026-04-07 00:47:58.573403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573414 | orchestrator | Tuesday 07 April 2026 00:47:55 +0000 (0:00:00.198) 0:00:53.239 ********* 2026-04-07 00:47:58.573424 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:47:58.573435 | orchestrator | 2026-04-07 00:47:58.573446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573456 | orchestrator | Tuesday 07 April 2026 00:47:56 +0000 (0:00:00.188) 0:00:53.428 ********* 2026-04-07 00:47:58.573467 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff) 2026-04-07 00:47:58.573479 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff) 2026-04-07 00:47:58.573490 | orchestrator | 2026-04-07 00:47:58.573500 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573511 | orchestrator | Tuesday 07 April 2026 00:47:56 +0000 (0:00:00.499) 0:00:53.928 ********* 2026-04-07 00:47:58.573522 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d) 2026-04-07 00:47:58.573532 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d) 2026-04-07 00:47:58.573543 | orchestrator | 2026-04-07 00:47:58.573554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573564 | orchestrator | Tuesday 07 April 2026 00:47:57 +0000 (0:00:00.439) 0:00:54.368 ********* 2026-04-07 00:47:58.573585 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe) 2026-04-07 00:47:58.573596 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-SQEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe) 2026-04-07 00:47:58.573607 | orchestrator | 2026-04-07 00:47:58.573617 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573628 | orchestrator | Tuesday 07 April 2026 00:47:57 +0000 (0:00:00.420) 0:00:54.788 ********* 2026-04-07 00:47:58.573639 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c) 2026-04-07 00:47:58.573649 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c) 2026-04-07 00:47:58.573660 | orchestrator | 2026-04-07 00:47:58.573671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 00:47:58.573682 | orchestrator | Tuesday 07 April 2026 00:47:57 +0000 (0:00:00.413) 0:00:55.201 ********* 2026-04-07 00:47:58.573692 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-07 00:47:58.573703 | orchestrator | 2026-04-07 00:47:58.573714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:47:58.573725 | orchestrator | Tuesday 07 April 2026 00:47:58 +0000 (0:00:00.329) 0:00:55.531 ********* 2026-04-07 00:47:58.573735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-07 00:47:58.573746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-07 00:47:58.573757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-07 00:47:58.573767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-07 00:47:58.573778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-07 00:47:58.573788 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-07 00:47:58.573799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-07 00:47:58.573810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-07 00:47:58.573820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-07 00:47:58.573837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-07 00:47:58.573848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-07 00:47:58.573866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-07 00:48:07.569096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-07 00:48:07.569156 | orchestrator | 2026-04-07 00:48:07.569167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569176 | orchestrator | Tuesday 07 April 2026 00:47:58 +0000 (0:00:00.428) 0:00:55.960 ********* 2026-04-07 00:48:07.569184 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569192 | orchestrator | 2026-04-07 00:48:07.569200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569208 | orchestrator | Tuesday 07 April 2026 00:47:58 +0000 (0:00:00.186) 0:00:56.147 ********* 2026-04-07 00:48:07.569216 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569223 | orchestrator | 2026-04-07 00:48:07.569231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569239 | orchestrator | Tuesday 07 April 2026 00:47:59 +0000 (0:00:00.189) 0:00:56.336 ********* 
2026-04-07 00:48:07.569247 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569255 | orchestrator | 2026-04-07 00:48:07.569263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569280 | orchestrator | Tuesday 07 April 2026 00:47:59 +0000 (0:00:00.632) 0:00:56.968 ********* 2026-04-07 00:48:07.569288 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569296 | orchestrator | 2026-04-07 00:48:07.569304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569311 | orchestrator | Tuesday 07 April 2026 00:47:59 +0000 (0:00:00.205) 0:00:57.174 ********* 2026-04-07 00:48:07.569319 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569327 | orchestrator | 2026-04-07 00:48:07.569361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569369 | orchestrator | Tuesday 07 April 2026 00:48:00 +0000 (0:00:00.231) 0:00:57.406 ********* 2026-04-07 00:48:07.569378 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569386 | orchestrator | 2026-04-07 00:48:07.569394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569402 | orchestrator | Tuesday 07 April 2026 00:48:00 +0000 (0:00:00.217) 0:00:57.623 ********* 2026-04-07 00:48:07.569411 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569418 | orchestrator | 2026-04-07 00:48:07.569426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569434 | orchestrator | Tuesday 07 April 2026 00:48:00 +0000 (0:00:00.202) 0:00:57.825 ********* 2026-04-07 00:48:07.569442 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569449 | orchestrator | 2026-04-07 00:48:07.569457 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2026-04-07 00:48:07.569465 | orchestrator | Tuesday 07 April 2026 00:48:00 +0000 (0:00:00.212) 0:00:58.038 ********* 2026-04-07 00:48:07.569473 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-07 00:48:07.569482 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-07 00:48:07.569490 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-07 00:48:07.569497 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-07 00:48:07.569505 | orchestrator | 2026-04-07 00:48:07.569513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569521 | orchestrator | Tuesday 07 April 2026 00:48:01 +0000 (0:00:00.671) 0:00:58.709 ********* 2026-04-07 00:48:07.569529 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569537 | orchestrator | 2026-04-07 00:48:07.569544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569566 | orchestrator | Tuesday 07 April 2026 00:48:01 +0000 (0:00:00.201) 0:00:58.911 ********* 2026-04-07 00:48:07.569574 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569582 | orchestrator | 2026-04-07 00:48:07.569590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569598 | orchestrator | Tuesday 07 April 2026 00:48:01 +0000 (0:00:00.269) 0:00:59.181 ********* 2026-04-07 00:48:07.569605 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569614 | orchestrator | 2026-04-07 00:48:07.569622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 00:48:07.569630 | orchestrator | Tuesday 07 April 2026 00:48:02 +0000 (0:00:00.226) 0:00:59.407 ********* 2026-04-07 00:48:07.569638 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569646 | orchestrator | 2026-04-07 00:48:07.569653 | orchestrator | TASK [Check 
whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-07 00:48:07.569661 | orchestrator | Tuesday 07 April 2026 00:48:02 +0000 (0:00:00.255) 0:00:59.662 ********* 2026-04-07 00:48:07.569669 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569677 | orchestrator | 2026-04-07 00:48:07.569685 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-07 00:48:07.569693 | orchestrator | Tuesday 07 April 2026 00:48:02 +0000 (0:00:00.368) 0:01:00.031 ********* 2026-04-07 00:48:07.569701 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'}}) 2026-04-07 00:48:07.569710 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27d9f8cd-a6eb-5015-929a-744349431582'}}) 2026-04-07 00:48:07.569717 | orchestrator | 2026-04-07 00:48:07.569725 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-07 00:48:07.569734 | orchestrator | Tuesday 07 April 2026 00:48:02 +0000 (0:00:00.178) 0:01:00.209 ********* 2026-04-07 00:48:07.569743 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'}) 2026-04-07 00:48:07.569752 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'}) 2026-04-07 00:48:07.569761 | orchestrator | 2026-04-07 00:48:07.569769 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-07 00:48:07.569789 | orchestrator | Tuesday 07 April 2026 00:48:04 +0000 (0:00:01.941) 0:01:02.151 ********* 2026-04-07 00:48:07.569798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 
'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})  2026-04-07 00:48:07.569808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})  2026-04-07 00:48:07.569816 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569831 | orchestrator | 2026-04-07 00:48:07.569846 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-07 00:48:07.569860 | orchestrator | Tuesday 07 April 2026 00:48:05 +0000 (0:00:00.141) 0:01:02.293 ********* 2026-04-07 00:48:07.569874 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'}) 2026-04-07 00:48:07.569888 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'}) 2026-04-07 00:48:07.569901 | orchestrator | 2026-04-07 00:48:07.569915 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-07 00:48:07.569930 | orchestrator | Tuesday 07 April 2026 00:48:06 +0000 (0:00:01.422) 0:01:03.716 ********* 2026-04-07 00:48:07.569940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})  2026-04-07 00:48:07.569953 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})  2026-04-07 00:48:07.569961 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.569969 | orchestrator | 2026-04-07 00:48:07.569976 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-07 00:48:07.569985 | orchestrator | Tuesday 07 April 2026 00:48:06 +0000 
(0:00:00.142) 0:01:03.859 ********* 2026-04-07 00:48:07.569992 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.570000 | orchestrator | 2026-04-07 00:48:07.570007 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-07 00:48:07.570051 | orchestrator | Tuesday 07 April 2026 00:48:06 +0000 (0:00:00.115) 0:01:03.974 ********* 2026-04-07 00:48:07.570060 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})  2026-04-07 00:48:07.570068 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})  2026-04-07 00:48:07.570076 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.570084 | orchestrator | 2026-04-07 00:48:07.570092 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-07 00:48:07.570100 | orchestrator | Tuesday 07 April 2026 00:48:06 +0000 (0:00:00.143) 0:01:04.117 ********* 2026-04-07 00:48:07.570108 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.570116 | orchestrator | 2026-04-07 00:48:07.570124 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-07 00:48:07.570139 | orchestrator | Tuesday 07 April 2026 00:48:06 +0000 (0:00:00.127) 0:01:04.245 ********* 2026-04-07 00:48:07.570147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})  2026-04-07 00:48:07.570155 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})  2026-04-07 00:48:07.570164 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.570172 | 
orchestrator | 2026-04-07 00:48:07.570180 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-07 00:48:07.570188 | orchestrator | Tuesday 07 April 2026 00:48:07 +0000 (0:00:00.157) 0:01:04.402 ********* 2026-04-07 00:48:07.570196 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.570204 | orchestrator | 2026-04-07 00:48:07.570212 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-07 00:48:07.570220 | orchestrator | Tuesday 07 April 2026 00:48:07 +0000 (0:00:00.116) 0:01:04.519 ********* 2026-04-07 00:48:07.570228 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})  2026-04-07 00:48:07.570236 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})  2026-04-07 00:48:07.570244 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:07.570252 | orchestrator | 2026-04-07 00:48:07.570260 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-07 00:48:07.570268 | orchestrator | Tuesday 07 April 2026 00:48:07 +0000 (0:00:00.151) 0:01:04.670 ********* 2026-04-07 00:48:07.570277 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:48:07.570285 | orchestrator | 2026-04-07 00:48:07.570293 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-07 00:48:07.570301 | orchestrator | Tuesday 07 April 2026 00:48:07 +0000 (0:00:00.123) 0:01:04.794 ********* 2026-04-07 00:48:07.570344 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})  2026-04-07 00:48:13.773536 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})  2026-04-07 00:48:13.773594 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.773603 | orchestrator | 2026-04-07 00:48:13.773611 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-07 00:48:13.773618 | orchestrator | Tuesday 07 April 2026 00:48:07 +0000 (0:00:00.290) 0:01:05.085 ********* 2026-04-07 00:48:13.773625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})  2026-04-07 00:48:13.773632 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})  2026-04-07 00:48:13.773639 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.773646 | orchestrator | 2026-04-07 00:48:13.773661 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-07 00:48:13.773668 | orchestrator | Tuesday 07 April 2026 00:48:07 +0000 (0:00:00.139) 0:01:05.225 ********* 2026-04-07 00:48:13.773675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})  2026-04-07 00:48:13.773682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})  2026-04-07 00:48:13.773688 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.773695 | orchestrator | 2026-04-07 00:48:13.773701 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-07 00:48:13.773708 | orchestrator | Tuesday 07 April 2026 00:48:08 +0000 (0:00:00.145) 0:01:05.370 ********* 2026-04-07 
00:48:13.773714 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.773720 | orchestrator | 2026-04-07 00:48:13.773726 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-07 00:48:13.773733 | orchestrator | Tuesday 07 April 2026 00:48:08 +0000 (0:00:00.127) 0:01:05.498 ********* 2026-04-07 00:48:13.773739 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.773746 | orchestrator | 2026-04-07 00:48:13.773752 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-07 00:48:13.773759 | orchestrator | Tuesday 07 April 2026 00:48:08 +0000 (0:00:00.127) 0:01:05.625 ********* 2026-04-07 00:48:13.773765 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.773772 | orchestrator | 2026-04-07 00:48:13.773779 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-07 00:48:13.773785 | orchestrator | Tuesday 07 April 2026 00:48:08 +0000 (0:00:00.148) 0:01:05.774 ********* 2026-04-07 00:48:13.773792 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 00:48:13.773799 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-07 00:48:13.773806 | orchestrator | } 2026-04-07 00:48:13.773813 | orchestrator | 2026-04-07 00:48:13.773819 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-07 00:48:13.773825 | orchestrator | Tuesday 07 April 2026 00:48:08 +0000 (0:00:00.146) 0:01:05.920 ********* 2026-04-07 00:48:13.773832 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 00:48:13.773838 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-07 00:48:13.773845 | orchestrator | } 2026-04-07 00:48:13.773851 | orchestrator | 2026-04-07 00:48:13.773857 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-07 00:48:13.773864 | orchestrator | Tuesday 07 April 2026 00:48:08 +0000 (0:00:00.126) 
0:01:06.047 ********* 2026-04-07 00:48:13.773870 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 00:48:13.773877 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-07 00:48:13.773883 | orchestrator | } 2026-04-07 00:48:13.773889 | orchestrator | 2026-04-07 00:48:13.773896 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-07 00:48:13.773902 | orchestrator | Tuesday 07 April 2026 00:48:08 +0000 (0:00:00.121) 0:01:06.168 ********* 2026-04-07 00:48:13.773925 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:48:13.773931 | orchestrator | 2026-04-07 00:48:13.773938 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-07 00:48:13.773944 | orchestrator | Tuesday 07 April 2026 00:48:09 +0000 (0:00:00.469) 0:01:06.638 ********* 2026-04-07 00:48:13.773950 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:48:13.773957 | orchestrator | 2026-04-07 00:48:13.773963 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-07 00:48:13.773969 | orchestrator | Tuesday 07 April 2026 00:48:09 +0000 (0:00:00.490) 0:01:07.128 ********* 2026-04-07 00:48:13.773976 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:48:13.773982 | orchestrator | 2026-04-07 00:48:13.773989 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-07 00:48:13.773995 | orchestrator | Tuesday 07 April 2026 00:48:10 +0000 (0:00:00.493) 0:01:07.622 ********* 2026-04-07 00:48:13.774001 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:48:13.774008 | orchestrator | 2026-04-07 00:48:13.774044 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-07 00:48:13.774051 | orchestrator | Tuesday 07 April 2026 00:48:10 +0000 (0:00:00.396) 0:01:08.018 ********* 2026-04-07 00:48:13.774057 | orchestrator | skipping: [testbed-node-5] 2026-04-07 
00:48:13.774064 | orchestrator | 2026-04-07 00:48:13.774071 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-07 00:48:13.774077 | orchestrator | Tuesday 07 April 2026 00:48:10 +0000 (0:00:00.110) 0:01:08.128 ********* 2026-04-07 00:48:13.774084 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.774090 | orchestrator | 2026-04-07 00:48:13.774097 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-07 00:48:13.774103 | orchestrator | Tuesday 07 April 2026 00:48:10 +0000 (0:00:00.142) 0:01:08.271 ********* 2026-04-07 00:48:13.774110 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 00:48:13.774117 | orchestrator |  "vgs_report": { 2026-04-07 00:48:13.774124 | orchestrator |  "vg": [] 2026-04-07 00:48:13.774140 | orchestrator |  } 2026-04-07 00:48:13.774147 | orchestrator | } 2026-04-07 00:48:13.774154 | orchestrator | 2026-04-07 00:48:13.774161 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-07 00:48:13.774168 | orchestrator | Tuesday 07 April 2026 00:48:11 +0000 (0:00:00.163) 0:01:08.434 ********* 2026-04-07 00:48:13.774175 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.774182 | orchestrator | 2026-04-07 00:48:13.774189 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-07 00:48:13.774196 | orchestrator | Tuesday 07 April 2026 00:48:11 +0000 (0:00:00.138) 0:01:08.572 ********* 2026-04-07 00:48:13.774203 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.774209 | orchestrator | 2026-04-07 00:48:13.774216 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-07 00:48:13.774223 | orchestrator | Tuesday 07 April 2026 00:48:11 +0000 (0:00:00.134) 0:01:08.707 ********* 2026-04-07 00:48:13.774230 | orchestrator | skipping: [testbed-node-5] 2026-04-07 
00:48:13.774237 | orchestrator | 2026-04-07 00:48:13.774243 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-07 00:48:13.774253 | orchestrator | Tuesday 07 April 2026 00:48:11 +0000 (0:00:00.138) 0:01:08.845 ********* 2026-04-07 00:48:13.774260 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.774267 | orchestrator | 2026-04-07 00:48:13.774274 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-07 00:48:13.774281 | orchestrator | Tuesday 07 April 2026 00:48:11 +0000 (0:00:00.130) 0:01:08.975 ********* 2026-04-07 00:48:13.774288 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.774295 | orchestrator | 2026-04-07 00:48:13.774302 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-07 00:48:13.774309 | orchestrator | Tuesday 07 April 2026 00:48:11 +0000 (0:00:00.132) 0:01:09.108 ********* 2026-04-07 00:48:13.774315 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.774341 | orchestrator | 2026-04-07 00:48:13.774349 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-07 00:48:13.774355 | orchestrator | Tuesday 07 April 2026 00:48:11 +0000 (0:00:00.150) 0:01:09.259 ********* 2026-04-07 00:48:13.774362 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.774369 | orchestrator | 2026-04-07 00:48:13.774375 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-07 00:48:13.774383 | orchestrator | Tuesday 07 April 2026 00:48:12 +0000 (0:00:00.137) 0:01:09.396 ********* 2026-04-07 00:48:13.774390 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:48:13.774397 | orchestrator | 2026-04-07 00:48:13.774404 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-07 00:48:13.774410 | orchestrator | Tuesday 07 
April 2026 00:48:12 +0000 (0:00:00.141) 0:01:09.538 *********
2026-04-07 00:48:13.774417 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:13.774424 | orchestrator |
2026-04-07 00:48:13.774431 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-07 00:48:13.774438 | orchestrator | Tuesday 07 April 2026 00:48:12 +0000 (0:00:00.373) 0:01:09.911 *********
2026-04-07 00:48:13.774444 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:13.774451 | orchestrator |
2026-04-07 00:48:13.774457 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-07 00:48:13.774464 | orchestrator | Tuesday 07 April 2026 00:48:12 +0000 (0:00:00.118) 0:01:10.030 *********
2026-04-07 00:48:13.774470 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:13.774476 | orchestrator |
2026-04-07 00:48:13.774483 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-07 00:48:13.774489 | orchestrator | Tuesday 07 April 2026 00:48:12 +0000 (0:00:00.158) 0:01:10.189 *********
2026-04-07 00:48:13.774495 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:13.774502 | orchestrator |
2026-04-07 00:48:13.774508 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-07 00:48:13.774514 | orchestrator | Tuesday 07 April 2026 00:48:13 +0000 (0:00:00.151) 0:01:10.340 *********
2026-04-07 00:48:13.774521 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:13.774527 | orchestrator |
2026-04-07 00:48:13.774533 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-07 00:48:13.774540 | orchestrator | Tuesday 07 April 2026 00:48:13 +0000 (0:00:00.147) 0:01:10.488 *********
2026-04-07 00:48:13.774546 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:13.774552 | orchestrator |
2026-04-07 00:48:13.774559 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-07 00:48:13.774565 | orchestrator | Tuesday 07 April 2026 00:48:13 +0000 (0:00:00.145) 0:01:10.634 *********
2026-04-07 00:48:13.774571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:13.774578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:13.774584 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:13.774591 | orchestrator |
2026-04-07 00:48:13.774597 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-07 00:48:13.774603 | orchestrator | Tuesday 07 April 2026 00:48:13 +0000 (0:00:00.158) 0:01:10.793 *********
2026-04-07 00:48:13.774610 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:13.774616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:13.774623 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:13.774629 | orchestrator |
2026-04-07 00:48:13.774636 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-07 00:48:13.774648 | orchestrator | Tuesday 07 April 2026 00:48:13 +0000 (0:00:00.182) 0:01:10.975 *********
2026-04-07 00:48:13.774660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.893801 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.893857 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:16.893866 | orchestrator |
2026-04-07 00:48:16.893873 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-07 00:48:16.893880 | orchestrator | Tuesday 07 April 2026 00:48:13 +0000 (0:00:00.193) 0:01:11.169 *********
2026-04-07 00:48:16.893887 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.893903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.893909 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:16.893916 | orchestrator |
2026-04-07 00:48:16.893922 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-07 00:48:16.893929 | orchestrator | Tuesday 07 April 2026 00:48:14 +0000 (0:00:00.156) 0:01:11.325 *********
2026-04-07 00:48:16.893935 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.893942 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.893948 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:16.893955 | orchestrator |
2026-04-07 00:48:16.893961 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-07 00:48:16.893967 | orchestrator | Tuesday 07 April 2026 00:48:14 +0000 (0:00:00.173) 0:01:11.498 *********
2026-04-07 00:48:16.893973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.893979 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.893986 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:16.893992 | orchestrator |
2026-04-07 00:48:16.893998 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-07 00:48:16.894004 | orchestrator | Tuesday 07 April 2026 00:48:14 +0000 (0:00:00.155) 0:01:11.653 *********
2026-04-07 00:48:16.894010 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.894066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.894073 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:16.894079 | orchestrator |
2026-04-07 00:48:16.894086 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-07 00:48:16.894092 | orchestrator | Tuesday 07 April 2026 00:48:14 +0000 (0:00:00.375) 0:01:12.029 *********
2026-04-07 00:48:16.894098 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.894104 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.894111 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:16.894129 | orchestrator |
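The sizing guards logged above ("Fail if size of DB+WAL LVs on ceph_db_wal_devices > available", "Fail if DB LV size < 30 GiB ...") reduce to two arithmetic checks. A minimal Python sketch under illustrative names (`check_db_lv_sizes`, `vg_free_bytes` are not taken from the role):

```python
# Hypothetical sketch of the sizing guards above; names are illustrative,
# not from the osism role.
GIB = 1024 ** 3

def check_db_lv_sizes(num_db_lvs: int, db_lv_size_bytes: int,
                      vg_free_bytes: int) -> None:
    """Mirror the two guard tasks: the total demand must fit into the VG,
    and each DB LV must be at least 30 GiB."""
    needed = num_db_lvs * db_lv_size_bytes
    if needed > vg_free_bytes:
        raise ValueError(
            f"DB LVs need {needed} bytes, only {vg_free_bytes} available")
    if db_lv_size_bytes < 30 * GIB:
        raise ValueError("DB LV size is below the 30 GiB minimum")

# Example: two 40 GiB DB LVs on a VG with 100 GiB free pass both checks.
check_db_lv_sizes(2, 40 * GIB, 100 * GIB)
```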
2026-04-07 00:48:16.894135 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-07 00:48:16.894141 | orchestrator | Tuesday 07 April 2026 00:48:14 +0000 (0:00:00.154) 0:01:12.184 *********
2026-04-07 00:48:16.894147 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:48:16.894155 | orchestrator |
2026-04-07 00:48:16.894161 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-07 00:48:16.894167 | orchestrator | Tuesday 07 April 2026 00:48:15 +0000 (0:00:00.474) 0:01:12.658 *********
2026-04-07 00:48:16.894173 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:48:16.894180 | orchestrator |
2026-04-07 00:48:16.894186 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-07 00:48:16.894192 | orchestrator | Tuesday 07 April 2026 00:48:15 +0000 (0:00:00.155) 0:01:13.123 *********
2026-04-07 00:48:16.894198 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:48:16.894204 | orchestrator |
2026-04-07 00:48:16.894210 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-07 00:48:16.894216 | orchestrator | Tuesday 07 April 2026 00:48:15 +0000 (0:00:00.155) 0:01:13.278 *********
2026-04-07 00:48:16.894222 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'vg_name': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.894229 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'vg_name': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.894236 | orchestrator |
2026-04-07 00:48:16.894242 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-07 00:48:16.894248 | orchestrator | Tuesday 07 April 2026 00:48:16 +0000 (0:00:00.198) 0:01:13.476 *********
2026-04-07 00:48:16.894265 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.894271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.894278 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:16.894284 | orchestrator |
2026-04-07 00:48:16.894290 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-07 00:48:16.894296 | orchestrator | Tuesday 07 April 2026 00:48:16 +0000 (0:00:00.172) 0:01:13.649 *********
2026-04-07 00:48:16.894303 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.894309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.894315 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:16.894321 | orchestrator |
2026-04-07 00:48:16.894403 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-07 00:48:16.894416 | orchestrator | Tuesday 07 April 2026 00:48:16 +0000 (0:00:00.163) 0:01:13.813 *********
2026-04-07 00:48:16.894423 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:48:16.894430 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:48:16.894436 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:16.894442 | orchestrator |
2026-04-07 00:48:16.894449 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-07 00:48:16.894455 | orchestrator | Tuesday 07 April 2026 00:48:16 +0000 (0:00:00.174) 0:01:13.988 *********
2026-04-07 00:48:16.894461 | orchestrator | ok: [testbed-node-5] => {
2026-04-07 00:48:16.894468 | orchestrator |     "lvm_report": {
2026-04-07 00:48:16.894474 | orchestrator |         "lv": [
2026-04-07 00:48:16.894487 | orchestrator |             {
2026-04-07 00:48:16.894494 | orchestrator |                 "lv_name": "osd-block-27d9f8cd-a6eb-5015-929a-744349431582",
2026-04-07 00:48:16.894501 | orchestrator |                 "vg_name": "ceph-27d9f8cd-a6eb-5015-929a-744349431582"
2026-04-07 00:48:16.894507 | orchestrator |             },
2026-04-07 00:48:16.894514 | orchestrator |             {
2026-04-07 00:48:16.894520 | orchestrator |                 "lv_name": "osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0",
2026-04-07 00:48:16.894527 | orchestrator |                 "vg_name": "ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0"
2026-04-07 00:48:16.894533 | orchestrator |             }
2026-04-07 00:48:16.894540 | orchestrator |         ],
2026-04-07 00:48:16.894547 | orchestrator |         "pv": [
2026-04-07 00:48:16.894554 | orchestrator |             {
2026-04-07 00:48:16.894561 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-07 00:48:16.894567 | orchestrator |                 "vg_name": "ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0"
2026-04-07 00:48:16.894574 | orchestrator |             },
2026-04-07 00:48:16.894580 | orchestrator |             {
2026-04-07 00:48:16.894586 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-07 00:48:16.894592 | orchestrator |                 "vg_name": "ceph-27d9f8cd-a6eb-5015-929a-744349431582"
2026-04-07 00:48:16.894599 | orchestrator |             }
2026-04-07 00:48:16.894605 | orchestrator |         ]
2026-04-07 00:48:16.894611 | orchestrator |     }
2026-04-07 00:48:16.894617 | orchestrator | }
2026-04-07 00:48:16.894623 | orchestrator |
2026-04-07 00:48:16.894629 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:48:16.894635 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-07 00:48:16.894642 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-07 00:48:16.894648 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-07 00:48:16.894654 | orchestrator |
2026-04-07 00:48:16.894661 | orchestrator |
2026-04-07 00:48:16.894667 | orchestrator |
2026-04-07 00:48:16.894678 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:48:16.894685 | orchestrator | Tuesday 07 April 2026 00:48:16 +0000 (0:00:00.171) 0:01:14.159 *********
2026-04-07 00:48:16.894691 | orchestrator | ===============================================================================
2026-04-07 00:48:16.894697 | orchestrator | Create block VGs -------------------------------------------------------- 5.95s
2026-04-07 00:48:16.894703 | orchestrator | Create block LVs -------------------------------------------------------- 4.14s
2026-04-07 00:48:16.894710 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.86s
2026-04-07 00:48:16.894716 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.60s
2026-04-07 00:48:16.894722 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s
2026-04-07 00:48:16.894728 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s
2026-04-07 00:48:16.894734 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.48s
2026-04-07 00:48:16.894740 | orchestrator | Add known partitions to the list of available block devices ------------- 1.45s
2026-04-07 00:48:16.894752 | orchestrator | Add known links to the list of available block devices ------------------ 1.44s
2026-04-07 00:48:17.348691 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s
2026-04-07 00:48:17.348751 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s
2026-04-07 00:48:17.348759 | orchestrator | Print LVM report data --------------------------------------------------- 0.93s
2026-04-07 00:48:17.348766 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2026-04-07 00:48:17.348772 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s
2026-04-07 00:48:17.348795 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s
2026-04-07 00:48:17.348801 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.75s
2026-04-07 00:48:17.348816 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s
2026-04-07 00:48:17.348822 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-04-07 00:48:17.348828 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.72s
2026-04-07 00:48:17.348834 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.70s
2026-04-07 00:48:29.154964 | orchestrator | 2026-04-07 00:48:29 | INFO  | Prepare task for execution of facts.
2026-04-07 00:48:29.225625 | orchestrator | 2026-04-07 00:48:29 | INFO  | Task e7c47877-d3f7-47ac-b5c1-1fbe1b82874e (facts) was prepared for execution.
2026-04-07 00:48:29.225668 | orchestrator | 2026-04-07 00:48:29 | INFO  | It takes a moment until task e7c47877-d3f7-47ac-b5c1-1fbe1b82874e (facts) has been started and output is visible here.
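The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Create list of VG/LV names" steps earlier in the Ceph play merge the JSON reports of `lvs` and `pvs` into the `lvm_report` structure that is printed afterwards. A rough sketch, assuming both commands were run with `--reportformat json` (sample data abbreviated to one entry each):

```python
import json

# Abbreviated report JSON as produced by `lvs --reportformat json` and
# `pvs --reportformat json`; the lv_name/vg_name/pv_name fields match
# the lvm_report printed in the log.
lvs_json = '''{"report": [{"lv": [
    {"lv_name": "osd-block-27d9f8cd-a6eb-5015-929a-744349431582",
     "vg_name": "ceph-27d9f8cd-a6eb-5015-929a-744349431582"}]}]}'''
pvs_json = '''{"report": [{"pv": [
    {"pv_name": "/dev/sdc",
     "vg_name": "ceph-27d9f8cd-a6eb-5015-929a-744349431582"}]}]}'''

# Combine both reports into one structure, as the playbook task does.
lvm_report = {
    "lv": json.loads(lvs_json)["report"][0]["lv"],
    "pv": json.loads(pvs_json)["report"][0]["pv"],
}

# Build "vg/lv" names, as in "Create list of VG/LV names".
vg_lv_names = [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]
```

The later "Fail if ... defined in lvm_volumes is missing" guards then only need to test membership in such a list.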
2026-04-07 00:48:40.104986 | orchestrator |
2026-04-07 00:48:40.105047 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-07 00:48:40.105057 | orchestrator |
2026-04-07 00:48:40.105064 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-07 00:48:40.105071 | orchestrator | Tuesday 07 April 2026 00:48:32 +0000 (0:00:00.332) 0:00:00.332 *********
2026-04-07 00:48:40.105077 | orchestrator | ok: [testbed-manager]
2026-04-07 00:48:40.105084 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:48:40.105091 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:48:40.105097 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:48:40.105103 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:48:40.105109 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:48:40.105115 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:48:40.105122 | orchestrator |
2026-04-07 00:48:40.105128 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-07 00:48:40.105134 | orchestrator | Tuesday 07 April 2026 00:48:33 +0000 (0:00:01.282) 0:00:01.615 *********
2026-04-07 00:48:40.105139 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:48:40.105146 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:48:40.105153 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:48:40.105159 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:48:40.105166 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:48:40.105172 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:48:40.105178 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:40.105185 | orchestrator |
2026-04-07 00:48:40.105191 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-07 00:48:40.105197 | orchestrator |
2026-04-07 00:48:40.105204 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-07 00:48:40.105210 | orchestrator | Tuesday 07 April 2026 00:48:35 +0000 (0:00:01.194) 0:00:02.810 *********
2026-04-07 00:48:40.105217 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:48:40.105223 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:48:40.105229 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:48:40.105236 | orchestrator | ok: [testbed-manager]
2026-04-07 00:48:40.105242 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:48:40.105248 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:48:40.105254 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:48:40.105260 | orchestrator |
2026-04-07 00:48:40.105267 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-07 00:48:40.105273 | orchestrator |
2026-04-07 00:48:40.105279 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-07 00:48:40.105286 | orchestrator | Tuesday 07 April 2026 00:48:39 +0000 (0:00:04.269) 0:00:07.079 *********
2026-04-07 00:48:40.105292 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:48:40.105299 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:48:40.105389 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:48:40.105397 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:48:40.105404 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:48:40.105410 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:48:40.105416 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:48:40.105422 | orchestrator |
2026-04-07 00:48:40.105428 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:48:40.105435 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:48:40.105442 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:48:40.105448 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:48:40.105454 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:48:40.105460 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:48:40.105466 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:48:40.105472 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 00:48:40.105479 | orchestrator |
2026-04-07 00:48:40.105485 | orchestrator |
2026-04-07 00:48:40.105492 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:48:40.105498 | orchestrator | Tuesday 07 April 2026 00:48:39 +0000 (0:00:00.503) 0:00:07.582 *********
2026-04-07 00:48:40.105504 | orchestrator | ===============================================================================
2026-04-07 00:48:40.105510 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.27s
2026-04-07 00:48:40.105516 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.28s
2026-04-07 00:48:40.105531 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.19s
2026-04-07 00:48:40.105538 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2026-04-07 00:48:51.642977 | orchestrator | 2026-04-07 00:48:51 | INFO  | Prepare task for execution of frr.
2026-04-07 00:48:51.726235 | orchestrator | 2026-04-07 00:48:51 | INFO  | Task 60692c36-b122-4c0a-8a94-d4942c682a3d (frr) was prepared for execution.
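When post-processing a job log like this one, the PLAY RECAP lines are the quickest health signal. A small illustrative parser (not part of the job itself) that turns a recap line into per-host counters:

```python
import re

# Illustrative parser for Ansible "PLAY RECAP" lines, e.g.
# "testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 ..."
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>.+)$")

def parse_recap(line: str):
    match = RECAP_RE.match(line.strip())
    if match is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {}
    for pair in match.group("counters").split():
        key, _, value = pair.partition("=")
        counters[key] = int(value)
    return match.group("host"), counters

host, counters = parse_recap(
    "testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 "
    "skipped=2  rescued=0 ignored=0"
)
# A run is healthy when nothing failed and every host was reachable.
assert counters["failed"] == 0 and counters["unreachable"] == 0
```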
2026-04-07 00:48:51.726306 | orchestrator | 2026-04-07 00:48:51 | INFO  | It takes a moment until task 60692c36-b122-4c0a-8a94-d4942c682a3d (frr) has been started and output is visible here.
2026-04-07 00:49:14.715464 | orchestrator |
2026-04-07 00:49:14.715576 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-07 00:49:14.715595 | orchestrator |
2026-04-07 00:49:14.715611 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-07 00:49:14.715625 | orchestrator | Tuesday 07 April 2026 00:48:55 +0000 (0:00:00.309) 0:00:00.309 *********
2026-04-07 00:49:14.715635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-07 00:49:14.715647 | orchestrator |
2026-04-07 00:49:14.715658 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-07 00:49:14.715669 | orchestrator | Tuesday 07 April 2026 00:48:55 +0000 (0:00:00.217) 0:00:00.527 *********
2026-04-07 00:49:14.715680 | orchestrator | changed: [testbed-manager]
2026-04-07 00:49:14.715689 | orchestrator |
2026-04-07 00:49:14.715696 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-07 00:49:14.715723 | orchestrator | Tuesday 07 April 2026 00:48:56 +0000 (0:00:01.413) 0:00:01.940 *********
2026-04-07 00:49:14.715730 | orchestrator | changed: [testbed-manager]
2026-04-07 00:49:14.715736 | orchestrator |
2026-04-07 00:49:14.715742 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-07 00:49:14.715749 | orchestrator | Tuesday 07 April 2026 00:49:05 +0000 (0:00:08.629) 0:00:10.570 *********
2026-04-07 00:49:14.715755 | orchestrator | ok: [testbed-manager]
2026-04-07 00:49:14.715762 | orchestrator |
2026-04-07 00:49:14.715769 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-07 00:49:14.715778 | orchestrator | Tuesday 07 April 2026 00:49:06 +0000 (0:00:00.889) 0:00:11.459 *********
2026-04-07 00:49:14.715788 | orchestrator | changed: [testbed-manager]
2026-04-07 00:49:14.715797 | orchestrator |
2026-04-07 00:49:14.715810 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-07 00:49:14.715824 | orchestrator | Tuesday 07 April 2026 00:49:07 +0000 (0:00:00.822) 0:00:12.282 *********
2026-04-07 00:49:14.715833 | orchestrator | ok: [testbed-manager]
2026-04-07 00:49:14.715843 | orchestrator |
2026-04-07 00:49:14.715853 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-07 00:49:14.715864 | orchestrator | Tuesday 07 April 2026 00:49:08 +0000 (0:00:01.107) 0:00:13.389 *********
2026-04-07 00:49:14.715874 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:49:14.715885 | orchestrator |
2026-04-07 00:49:14.715894 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-07 00:49:14.715901 | orchestrator | Tuesday 07 April 2026 00:49:08 +0000 (0:00:00.147) 0:00:13.537 *********
2026-04-07 00:49:14.715907 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:49:14.715913 | orchestrator |
2026-04-07 00:49:14.715919 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-07 00:49:14.715926 | orchestrator | Tuesday 07 April 2026 00:49:08 +0000 (0:00:00.236) 0:00:13.773 *********
2026-04-07 00:49:14.715932 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:49:14.715938 | orchestrator |
2026-04-07 00:49:14.715945 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-07 00:49:14.715952 | orchestrator | Tuesday 07 April 2026 00:49:08 +0000 (0:00:00.155) 0:00:13.928 *********
2026-04-07 00:49:14.715958 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:49:14.715964 | orchestrator |
2026-04-07 00:49:14.715971 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-07 00:49:14.715977 | orchestrator | Tuesday 07 April 2026 00:49:08 +0000 (0:00:00.131) 0:00:14.060 *********
2026-04-07 00:49:14.715983 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:49:14.715989 | orchestrator |
2026-04-07 00:49:14.715995 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-07 00:49:14.716001 | orchestrator | Tuesday 07 April 2026 00:49:09 +0000 (0:00:00.145) 0:00:14.205 *********
2026-04-07 00:49:14.716009 | orchestrator | changed: [testbed-manager]
2026-04-07 00:49:14.716016 | orchestrator |
2026-04-07 00:49:14.716023 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-07 00:49:14.716031 | orchestrator | Tuesday 07 April 2026 00:49:09 +0000 (0:00:00.892) 0:00:15.098 *********
2026-04-07 00:49:14.716038 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-07 00:49:14.716045 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-07 00:49:14.716054 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-07 00:49:14.716062 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-07 00:49:14.716070 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-07 00:49:14.716077 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-07 00:49:14.716091 | orchestrator |
2026-04-07 00:49:14.716098 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-07 00:49:14.716118 | orchestrator | Tuesday 07 April 2026 00:49:12 +0000 (0:00:02.070) 0:00:17.168 *********
2026-04-07 00:49:14.716126 | orchestrator | ok: [testbed-manager]
2026-04-07 00:49:14.716133 | orchestrator |
2026-04-07 00:49:14.716140 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-07 00:49:14.716147 | orchestrator | Tuesday 07 April 2026 00:49:13 +0000 (0:00:01.136) 0:00:18.305 *********
2026-04-07 00:49:14.716155 | orchestrator | changed: [testbed-manager]
2026-04-07 00:49:14.716162 | orchestrator |
2026-04-07 00:49:14.716169 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:49:14.716176 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-07 00:49:14.716184 | orchestrator |
2026-04-07 00:49:14.716191 | orchestrator |
2026-04-07 00:49:14.716212 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:49:14.716220 | orchestrator | Tuesday 07 April 2026 00:49:14 +0000 (0:00:01.298) 0:00:19.604 *********
2026-04-07 00:49:14.716227 | orchestrator | ===============================================================================
2026-04-07 00:49:14.716234 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.63s
2026-04-07 00:49:14.716241 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.07s
2026-04-07 00:49:14.716249 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.41s
2026-04-07 00:49:14.716256 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.30s
2026-04-07 00:49:14.716263 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.14s
2026-04-07 00:49:14.716270 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s
2026-04-07 00:49:14.716278 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.89s
2026-04-07 00:49:14.716285 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.89s
2026-04-07 00:49:14.716293 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.82s
2026-04-07 00:49:14.716362 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.24s
2026-04-07 00:49:14.716369 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s
2026-04-07 00:49:14.716375 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s
2026-04-07 00:49:14.716381 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s
2026-04-07 00:49:14.716387 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s
2026-04-07 00:49:14.716394 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s
2026-04-07 00:49:14.857791 | orchestrator |
2026-04-07 00:49:14.859365 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Apr 7 00:49:14 UTC 2026
2026-04-07 00:49:14.859463 | orchestrator |
2026-04-07 00:49:15.883711 | orchestrator | 2026-04-07 00:49:15 | INFO  | Collection nutshell is prepared for execution
2026-04-07 00:49:15.984275 | orchestrator | 2026-04-07 00:49:15 | INFO  | A [0] - dotfiles
2026-04-07 00:49:26.051931 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [0] - homer
2026-04-07 00:49:26.052027 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [0] - netdata
2026-04-07 00:49:26.052037 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [0] - openstackclient
2026-04-07 00:49:26.052047 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [0] - phpmyadmin
2026-04-07 00:49:26.052061 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [0] - common
2026-04-07 00:49:26.053731 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [1] -- loadbalancer
2026-04-07 00:49:26.053776 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [2] --- opensearch
2026-04-07 00:49:26.054053 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [2] --- mariadb-ng
2026-04-07 00:49:26.054174 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [3] ---- horizon
2026-04-07 00:49:26.054187 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [3] ---- keystone
2026-04-07 00:49:26.054201 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [4] ----- neutron
2026-04-07 00:49:26.054470 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [5] ------ wait-for-nova
2026-04-07 00:49:26.054676 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [6] ------- octavia
2026-04-07 00:49:26.055899 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [4] ----- barbican
2026-04-07 00:49:26.055988 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [4] ----- designate
2026-04-07 00:49:26.055997 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [4] ----- ironic
2026-04-07 00:49:26.056010 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [4] ----- placement
2026-04-07 00:49:26.056071 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [4] ----- magnum
2026-04-07 00:49:26.057468 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [1] -- openvswitch
2026-04-07 00:49:26.057507 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [2] --- ovn
2026-04-07 00:49:26.057943 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [1] -- memcached
2026-04-07 00:49:26.057966 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [1] -- redis
2026-04-07 00:49:26.058153 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [1] -- rabbitmq-ng
2026-04-07 00:49:26.058250 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [0] - kubernetes
2026-04-07 00:49:26.060392 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [1] -- kubeconfig
2026-04-07 00:49:26.060444 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [1] -- copy-kubeconfig
2026-04-07 00:49:26.060458 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [0] - ceph
2026-04-07 00:49:26.062213 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [1] -- ceph-pools
2026-04-07 00:49:26.062359 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [2] --- copy-ceph-keys
2026-04-07 00:49:26.062380 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [3] ---- cephclient
2026-04-07 00:49:26.062460 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-04-07 00:49:26.062538 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [4] ----- wait-for-keystone
2026-04-07 00:49:26.062554 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [5] ------ kolla-ceph-rgw
2026-04-07 00:49:26.062564 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [5] ------ glance
2026-04-07 00:49:26.062578 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [5] ------ cinder
2026-04-07 00:49:26.062590 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [5] ------ nova
2026-04-07 00:49:26.063147 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [4] ----- prometheus
2026-04-07 00:49:26.063173 | orchestrator | 2026-04-07 00:49:26 | INFO  | A [5] ------ grafana
2026-04-07 00:49:26.238538 | orchestrator | 2026-04-07 00:49:26 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-04-07 00:49:26.238695 | orchestrator | 2026-04-07 00:49:26 | INFO  | Tasks are running in the background
2026-04-07 00:49:27.854742 | orchestrator | 2026-04-07 00:49:27 | INFO  | No task IDs specified, wait for all currently running tasks
2026-04-07 00:49:30.051420 | orchestrator | 2026-04-07 00:49:30 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:49:30.052566 | orchestrator | 2026-04-07 00:49:30 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED
2026-04-07 00:49:30.052694 | orchestrator | 2026-04-07 00:49:30 | INFO  | Task 87886593-fdfd-499a-975a-c212fc37772d is in state STARTED
2026-04-07 00:49:30.056283 | orchestrator | 2026-04-07 00:49:30 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:49:30.056919 | orchestrator | 2026-04-07 00:49:30 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:49:30.058122 | orchestrator | 2026-04-07 00:49:30 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:49:30.058800 | orchestrator | 2026-04-07 00:49:30 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:49:30.061988 | orchestrator | 2026-04-07 00:49:30 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:49:33.100736 | orchestrator | 2026-04-07 00:49:33 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:49:33.101721 | orchestrator | 2026-04-07 00:49:33 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED
2026-04-07 00:49:33.102974 | orchestrator | 2026-04-07 00:49:33 | INFO  | Task 87886593-fdfd-499a-975a-c212fc37772d is in state STARTED
2026-04-07 00:49:33.104634 | orchestrator | 2026-04-07 00:49:33 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:49:33.106616 | orchestrator | 2026-04-07 00:49:33 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:49:33.109737 | orchestrator | 2026-04-07 00:49:33 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:49:33.114766 | orchestrator | 2026-04-07 00:49:33 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:49:33.114843 | orchestrator | 2026-04-07 00:49:33 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:49:36.169900 | orchestrator | 2026-04-07 00:49:36 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:49:36.169964 | orchestrator | 2026-04-07 00:49:36 | INFO  | Task 
a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED 2026-04-07 00:49:36.169970 | orchestrator | 2026-04-07 00:49:36 | INFO  | Task 87886593-fdfd-499a-975a-c212fc37772d is in state STARTED 2026-04-07 00:49:36.169975 | orchestrator | 2026-04-07 00:49:36 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED 2026-04-07 00:49:36.169979 | orchestrator | 2026-04-07 00:49:36 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:49:36.169983 | orchestrator | 2026-04-07 00:49:36 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:49:36.169987 | orchestrator | 2026-04-07 00:49:36 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED 2026-04-07 00:49:36.169991 | orchestrator | 2026-04-07 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:49:39.392448 | orchestrator | 2026-04-07 00:49:39 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:49:39.392530 | orchestrator | 2026-04-07 00:49:39 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED 2026-04-07 00:49:39.396887 | orchestrator | 2026-04-07 00:49:39 | INFO  | Task 87886593-fdfd-499a-975a-c212fc37772d is in state STARTED 2026-04-07 00:49:39.404220 | orchestrator | 2026-04-07 00:49:39 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED 2026-04-07 00:49:39.412830 | orchestrator | 2026-04-07 00:49:39 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:49:39.412894 | orchestrator | 2026-04-07 00:49:39 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:49:39.412920 | orchestrator | 2026-04-07 00:49:39 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED 2026-04-07 00:49:39.412928 | orchestrator | 2026-04-07 00:49:39 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:49:42.469743 | orchestrator | 2026-04-07 00:49:42 | INFO  | Task 
acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:49:42.469824 | orchestrator | 2026-04-07 00:49:42 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED 2026-04-07 00:49:42.474543 | orchestrator | 2026-04-07 00:49:42 | INFO  | Task 87886593-fdfd-499a-975a-c212fc37772d is in state STARTED 2026-04-07 00:49:42.475478 | orchestrator | 2026-04-07 00:49:42 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED 2026-04-07 00:49:42.475498 | orchestrator | 2026-04-07 00:49:42 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:49:42.477205 | orchestrator | 2026-04-07 00:49:42 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:49:42.478994 | orchestrator | 2026-04-07 00:49:42 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED 2026-04-07 00:49:42.479439 | orchestrator | 2026-04-07 00:49:42 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:49:45.799008 | orchestrator | 2026-04-07 00:49:45 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:49:45.799079 | orchestrator | 2026-04-07 00:49:45 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED 2026-04-07 00:49:45.799086 | orchestrator | 2026-04-07 00:49:45 | INFO  | Task 87886593-fdfd-499a-975a-c212fc37772d is in state STARTED 2026-04-07 00:49:45.799091 | orchestrator | 2026-04-07 00:49:45 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED 2026-04-07 00:49:45.799095 | orchestrator | 2026-04-07 00:49:45 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:49:45.799099 | orchestrator | 2026-04-07 00:49:45 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:49:45.799103 | orchestrator | 2026-04-07 00:49:45 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED 2026-04-07 00:49:45.799107 | orchestrator | 2026-04-07 
00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:49:48.632618 | orchestrator | 2026-04-07 00:49:48 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:49:48.636615 | orchestrator | 2026-04-07 00:49:48 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED 2026-04-07 00:49:48.641759 | orchestrator | 2026-04-07 00:49:48.641834 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-07 00:49:48.641841 | orchestrator | 2026-04-07 00:49:48.641845 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-04-07 00:49:48.641850 | orchestrator | Tuesday 07 April 2026 00:49:35 +0000 (0:00:00.550) 0:00:00.550 ********* 2026-04-07 00:49:48.641854 | orchestrator | changed: [testbed-manager] 2026-04-07 00:49:48.641859 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:49:48.641863 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:49:48.641868 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:49:48.641874 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:49:48.641880 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:49:48.641886 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:49:48.641892 | orchestrator | 2026-04-07 00:49:48.641898 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-04-07 00:49:48.641904 | orchestrator | Tuesday 07 April 2026 00:49:39 +0000 (0:00:04.437) 0:00:04.988 ********* 2026-04-07 00:49:48.641910 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-07 00:49:48.641937 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-07 00:49:48.641943 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-07 00:49:48.641949 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-07 00:49:48.641961 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-07 00:49:48.641968 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-07 00:49:48.641974 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-07 00:49:48.641980 | orchestrator | 2026-04-07 00:49:48.641986 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-04-07 00:49:48.641993 | orchestrator | Tuesday 07 April 2026 00:49:41 +0000 (0:00:02.033) 0:00:07.022 ********* 2026-04-07 00:49:48.642001 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-07 00:49:40.116791', 'end': '2026-04-07 00:49:40.123852', 'delta': '0:00:00.007061', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-07 00:49:48.642010 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-07 00:49:40.310891', 'end': '2026-04-07 00:49:40.316459', 'delta': '0:00:00.005568', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-07 00:49:48.642048 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-07 00:49:40.751377', 'end': '2026-04-07 00:49:40.757465', 'delta': '0:00:00.006088', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-07 00:49:48.642077 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-07 00:49:40.457664', 'end': '2026-04-07 00:49:40.463910', 'delta': '0:00:00.006246', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-07 00:49:48.642100 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-07 00:49:40.277756', 'end': '2026-04-07 00:49:41.286420', 'delta': '0:00:01.008664', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-07 00:49:48.642108 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-07 00:49:40.377909', 'end': '2026-04-07 00:49:40.383445', 'delta': '0:00:00.005536', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-07 00:49:48.642115 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-07 00:49:40.912067', 'end': '2026-04-07 00:49:40.917823', 'delta': '0:00:00.005756', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-07 00:49:48.642122 | orchestrator | 2026-04-07 00:49:48.642128 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-04-07 00:49:48.642134 | orchestrator | Tuesday 07 April 2026 00:49:43 +0000 (0:00:01.772) 0:00:08.794 ********* 2026-04-07 00:49:48.642138 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-07 00:49:48.642141 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-07 00:49:48.642145 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-07 00:49:48.642149 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-07 00:49:48.642152 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-07 00:49:48.642156 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-07 00:49:48.642160 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-07 00:49:48.642163 | orchestrator | 2026-04-07 00:49:48.642167 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-04-07 00:49:48.642171 | orchestrator | Tuesday 07 April 2026 00:49:44 +0000 (0:00:01.484) 0:00:10.279 ********* 2026-04-07 00:49:48.642174 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-07 00:49:48.642178 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-07 00:49:48.642182 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-07 00:49:48.642186 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-07 00:49:48.642189 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-07 00:49:48.642197 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-07 00:49:48.642201 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-07 00:49:48.642204 | orchestrator | 2026-04-07 00:49:48.642208 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:49:48.642217 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:49:48.642223 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:49:48.642226 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:49:48.642230 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:49:48.642234 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:49:48.642240 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:49:48.642244 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:49:48.642248 | orchestrator | 2026-04-07 00:49:48.642252 | orchestrator | 2026-04-07 00:49:48.642255 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:49:48.642259 | orchestrator | Tuesday 07 April 2026 00:49:47 +0000 (0:00:02.261) 0:00:12.540 ********* 2026-04-07 00:49:48.642263 | orchestrator | =============================================================================== 2026-04-07 00:49:48.642267 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.44s 2026-04-07 00:49:48.642270 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.26s 2026-04-07 00:49:48.642274 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.03s 2026-04-07 00:49:48.642278 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.77s 2026-04-07 00:49:48.642342 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. 
---- 1.48s 2026-04-07 00:49:48.642346 | orchestrator | 2026-04-07 00:49:48 | INFO  | Task 87886593-fdfd-499a-975a-c212fc37772d is in state SUCCESS 2026-04-07 00:49:48.643116 | orchestrator | 2026-04-07 00:49:48 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED 2026-04-07 00:49:48.647955 | orchestrator | 2026-04-07 00:49:48 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:49:48.653962 | orchestrator | 2026-04-07 00:49:48 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:49:48.656468 | orchestrator | 2026-04-07 00:49:48 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED 2026-04-07 00:49:48.656546 | orchestrator | 2026-04-07 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:49:51.800761 | orchestrator | 2026-04-07 00:49:51 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:49:51.802543 | orchestrator | 2026-04-07 00:49:51 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED 2026-04-07 00:49:51.803124 | orchestrator | 2026-04-07 00:49:51 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED 2026-04-07 00:49:51.804717 | orchestrator | 2026-04-07 00:49:51 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:49:52.062119 | orchestrator | 2026-04-07 00:49:51 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:49:52.062238 | orchestrator | 2026-04-07 00:49:51 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED 2026-04-07 00:49:52.062252 | orchestrator | 2026-04-07 00:49:51 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED 2026-04-07 00:49:52.062261 | orchestrator | 2026-04-07 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:49:55.001954 | orchestrator | 2026-04-07 00:49:54 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state 
STARTED 2026-04-07 00:49:55.002058 | orchestrator | 2026-04-07 00:49:54 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED 2026-04-07 00:49:55.022167 | orchestrator | 2026-04-07 00:49:54 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED 2026-04-07 00:49:55.022259 | orchestrator | 2026-04-07 00:49:54 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:49:55.022385 | orchestrator | 2026-04-07 00:49:54 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:49:55.022424 | orchestrator | 2026-04-07 00:49:54 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED 2026-04-07 00:49:55.022440 | orchestrator | 2026-04-07 00:49:54 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED 2026-04-07 00:49:55.022454 | orchestrator | 2026-04-07 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:49:57.989769 | orchestrator | 2026-04-07 00:49:57 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:49:57.991188 | orchestrator | 2026-04-07 00:49:57 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED 2026-04-07 00:49:57.991583 | orchestrator | 2026-04-07 00:49:57 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED 2026-04-07 00:49:57.994785 | orchestrator | 2026-04-07 00:49:57 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:49:57.994865 | orchestrator | 2026-04-07 00:49:57 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:49:57.996554 | orchestrator | 2026-04-07 00:49:57 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED 2026-04-07 00:49:57.996794 | orchestrator | 2026-04-07 00:49:57 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED 2026-04-07 00:49:57.997137 | orchestrator | 2026-04-07 00:49:57 | INFO  | Wait 1 second(s) until the next check 
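The geerlingguy.dotfiles play above runs a four-step idempotent-linking sequence per dotfile: check what exists at the target path, remove a plain file only when a symlink will replace it, ensure parent folders exist, then link. A minimal sketch of that logic in plain Python (hypothetical helper, not the role's actual Ansible tasks):

```python
from pathlib import Path


def link_dotfile(repo_file: Path, home_link: Path) -> bool:
    """Idempotently symlink home_link -> repo_file; True if something changed."""
    if home_link.is_symlink():
        if home_link.resolve() == repo_file.resolve():
            return False  # already linked correctly ("ok" in Ansible terms)
        home_link.unlink()  # symlink to the wrong target: replace it
    elif home_link.exists():
        home_link.unlink()  # plain file in the way: remove before linking
    home_link.parent.mkdir(parents=True, exist_ok=True)  # ensure parent folders
    home_link.symlink_to(repo_file)
    return True  # "changed" in Ansible terms
```

A second run over the same file returns `False`, which mirrors why the play reports `changed` on the first deployment and would report `ok` afterwards.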
2026-04-07 00:50:01.037346 | orchestrator | 2026-04-07 00:50:01 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:01.037855 | orchestrator | 2026-04-07 00:50:01 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED
2026-04-07 00:50:01.038244 | orchestrator | 2026-04-07 00:50:01 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:01.040169 | orchestrator | 2026-04-07 00:50:01 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:01.040674 | orchestrator | 2026-04-07 00:50:01 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:01.041358 | orchestrator | 2026-04-07 00:50:01 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:01.043328 | orchestrator | 2026-04-07 00:50:01 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:50:01.043373 | orchestrator | 2026-04-07 00:50:01 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:50:04.127059 | orchestrator | 2026-04-07 00:50:04 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:04.127998 | orchestrator | 2026-04-07 00:50:04 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED
2026-04-07 00:50:04.129690 | orchestrator | 2026-04-07 00:50:04 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:04.132679 | orchestrator | 2026-04-07 00:50:04 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:04.134171 | orchestrator | 2026-04-07 00:50:04 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:04.141933 | orchestrator | 2026-04-07 00:50:04 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:04.141992 | orchestrator | 2026-04-07 00:50:04 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:50:04.141999 | orchestrator | 2026-04-07 00:50:04 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:50:07.264950 | orchestrator | 2026-04-07 00:50:07 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:07.272392 | orchestrator | 2026-04-07 00:50:07 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED
2026-04-07 00:50:07.272462 | orchestrator | 2026-04-07 00:50:07 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:07.272471 | orchestrator | 2026-04-07 00:50:07 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:07.272478 | orchestrator | 2026-04-07 00:50:07 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:07.272484 | orchestrator | 2026-04-07 00:50:07 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:07.272490 | orchestrator | 2026-04-07 00:50:07 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:50:07.272497 | orchestrator | 2026-04-07 00:50:07 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:50:10.314948 | orchestrator | 2026-04-07 00:50:10 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:10.315014 | orchestrator | 2026-04-07 00:50:10 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED
2026-04-07 00:50:10.315019 | orchestrator | 2026-04-07 00:50:10 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:10.315203 | orchestrator | 2026-04-07 00:50:10 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:10.315586 | orchestrator | 2026-04-07 00:50:10 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:10.316178 | orchestrator | 2026-04-07 00:50:10 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:10.316586 | orchestrator | 2026-04-07 00:50:10 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:50:10.317058 | orchestrator | 2026-04-07 00:50:10 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:50:13.380048 | orchestrator | 2026-04-07 00:50:13 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:13.380138 | orchestrator | 2026-04-07 00:50:13 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state STARTED
2026-04-07 00:50:13.381179 | orchestrator | 2026-04-07 00:50:13 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:13.382786 | orchestrator | 2026-04-07 00:50:13 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:13.389375 | orchestrator | 2026-04-07 00:50:13 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:13.389996 | orchestrator | 2026-04-07 00:50:13 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:13.391164 | orchestrator | 2026-04-07 00:50:13 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:50:13.402165 | orchestrator | 2026-04-07 00:50:13 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:50:16.534088 | orchestrator | 2026-04-07 00:50:16 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:16.537477 | orchestrator | 2026-04-07 00:50:16 | INFO  | Task a8770dc2-1ef3-4095-9adb-2a896bb9152b is in state SUCCESS
2026-04-07 00:50:16.541858 | orchestrator | 2026-04-07 00:50:16 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:16.546129 | orchestrator | 2026-04-07 00:50:16 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:16.552139 | orchestrator | 2026-04-07 00:50:16 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:16.555837 | orchestrator | 2026-04-07 00:50:16 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:16.557927 | orchestrator | 2026-04-07 00:50:16 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:50:16.557995 | orchestrator | 2026-04-07 00:50:16 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:50:19.685431 | orchestrator | 2026-04-07 00:50:19 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:19.685512 | orchestrator | 2026-04-07 00:50:19 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:19.689485 | orchestrator | 2026-04-07 00:50:19 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:19.690497 | orchestrator | 2026-04-07 00:50:19 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:19.691208 | orchestrator | 2026-04-07 00:50:19 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:19.692104 | orchestrator | 2026-04-07 00:50:19 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:50:19.692681 | orchestrator | 2026-04-07 00:50:19 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:50:22.816456 | orchestrator | 2026-04-07 00:50:22 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:22.816534 | orchestrator | 2026-04-07 00:50:22 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:22.816544 | orchestrator | 2026-04-07 00:50:22 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:22.816784 | orchestrator | 2026-04-07 00:50:22 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:22.816794 | orchestrator | 2026-04-07 00:50:22 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:22.816799 | orchestrator | 2026-04-07 00:50:22 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:50:22.816805 | orchestrator | 2026-04-07 00:50:22 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:50:25.825999 | orchestrator | 2026-04-07 00:50:25 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:25.835359 | orchestrator | 2026-04-07 00:50:25 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:25.839790 | orchestrator | 2026-04-07 00:50:25 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:25.845658 | orchestrator | 2026-04-07 00:50:25 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:25.850206 | orchestrator | 2026-04-07 00:50:25 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:25.866944 | orchestrator | 2026-04-07 00:50:25 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state STARTED
2026-04-07 00:50:25.867028 | orchestrator | 2026-04-07 00:50:25 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:50:28.920090 | orchestrator | 2026-04-07 00:50:28 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:50:28.921650 | orchestrator | 2026-04-07 00:50:28 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state STARTED
2026-04-07 00:50:28.926690 | orchestrator | 2026-04-07 00:50:28 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED
2026-04-07 00:50:28.933158 | orchestrator | 2026-04-07 00:50:28 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:50:28.933585 | orchestrator | 2026-04-07 00:50:28 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state STARTED
2026-04-07 00:50:28.934151 | orchestrator | 2026-04-07 00:50:28 | INFO  | Task 18bfb006-8423-4268-aa6a-2356b91e5b71 is in state SUCCESS
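The recurring "Task … is in state STARTED … Wait 1 second(s) until the next check" records come from a wait loop that polls the state of each background task and drops a task from the pending set once it reaches a terminal state such as SUCCESS. A minimal sketch of that polling pattern (hypothetical `get_task_state` lookup; not the actual osism implementation):

```python
import time

# States that end the wait for a task (assumed set for this sketch)
TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until every task has reached a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

With `interval=1.0` this reproduces the cadence seen above: one log line per pending task per round, followed by the wait message, until the set of pending tasks is empty.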
2026-04-07 00:50:28.934191 | orchestrator | 2026-04-07 00:50:28 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:51:02.643893 | orchestrator | 2026-04-07 00:51:02 | INFO  | Task 3f1c4590-1de7-4ac1-a980-556cf9a17ee0 is in state SUCCESS
2026-04-07 00:51:02.643940 | orchestrator | 2026-04-07 00:51:02 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:51:02.644653 | orchestrator |
2026-04-07 00:51:02.644681 | orchestrator |
2026-04-07 00:51:02.644686 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-04-07 00:51:02.644690 | orchestrator |
2026-04-07 00:51:02.644695 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-07 00:51:02.644700 | orchestrator | Tuesday 07 April 2026 00:49:35 +0000 (0:00:00.709) 0:00:00.709 *********
2026-04-07 00:51:02.644704 | orchestrator | ok: [testbed-manager] => {
2026-04-07 00:51:02.644710 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-07 00:51:02.644716 | orchestrator | }
2026-04-07 00:51:02.644720 | orchestrator |
2026-04-07 00:51:02.644725 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-07 00:51:02.644729 | orchestrator | Tuesday 07 April 2026 00:49:36 +0000 (0:00:00.551) 0:00:01.261 *********
2026-04-07 00:51:02.644733 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:02.644738 | orchestrator |
2026-04-07 00:51:02.644755 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-07 00:51:02.644759 | orchestrator | Tuesday 07 April 2026 00:49:38 +0000 (0:00:01.655) 0:00:02.916 *********
2026-04-07 00:51:02.644763 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-07 00:51:02.644767 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-07 00:51:02.644772 | orchestrator |
2026-04-07 00:51:02.644776 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-07 00:51:02.644784 | orchestrator | Tuesday 07 April 2026 00:49:40 +0000 (0:00:02.206) 0:00:05.122 *********
2026-04-07 00:51:02.644788 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.644792 | orchestrator |
2026-04-07 00:51:02.644796 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-07 00:51:02.644799 | orchestrator | Tuesday 07 April 2026 00:49:42 +0000 (0:00:02.074) 0:00:07.197 *********
2026-04-07 00:51:02.644803 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.644807 | orchestrator |
2026-04-07 00:51:02.644811 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-07 00:51:02.644815 | orchestrator | Tuesday 07 April 2026 00:49:44 +0000 (0:00:01.754) 0:00:08.951 *********
2026-04-07 00:51:02.644819 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-04-07 00:51:02.644822 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:02.644826 | orchestrator |
2026-04-07 00:51:02.644830 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-07 00:51:02.644834 | orchestrator | Tuesday 07 April 2026 00:50:10 +0000 (0:00:26.619) 0:00:35.571 *********
2026-04-07 00:51:02.644838 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.644842 | orchestrator |
2026-04-07 00:51:02.644845 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:51:02.644849 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:51:02.644855 | orchestrator |
2026-04-07 00:51:02.644859 | orchestrator |
2026-04-07 00:51:02.644863 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:51:02.644867 | orchestrator | Tuesday 07 April 2026 00:50:13 +0000 (0:00:02.902) 0:00:38.473 *********
2026-04-07 00:51:02.644870 | orchestrator | ===============================================================================
2026-04-07 00:51:02.644874 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.62s
2026-04-07 00:51:02.644878 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.90s
2026-04-07 00:51:02.644882 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.21s
2026-04-07 00:51:02.644886 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.07s
2026-04-07 00:51:02.644889 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.75s
2026-04-07 00:51:02.644893 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.66s
2026-04-07 00:51:02.644897 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.55s
2026-04-07 00:51:02.644902 | orchestrator |
2026-04-07 00:51:02.644908 | orchestrator |
2026-04-07 00:51:02.644913 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-07 00:51:02.644919 | orchestrator |
2026-04-07 00:51:02.644928 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-07 00:51:02.644936 | orchestrator | Tuesday 07 April 2026 00:49:34 +0000 (0:00:00.418) 0:00:00.418 *********
2026-04-07 00:51:02.644942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-07 00:51:02.644949 | orchestrator |
2026-04-07 00:51:02.644955 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-07 00:51:02.644960 | orchestrator | Tuesday 07 April 2026 00:49:35 +0000 (0:00:00.550) 0:00:00.968 *********
2026-04-07 00:51:02.644971 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-07 00:51:02.644977 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-07 00:51:02.644984 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-07 00:51:02.644990 | orchestrator |
2026-04-07 00:51:02.644996 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-07 00:51:02.645003 | orchestrator | Tuesday 07 April 2026 00:49:38 +0000 (0:00:03.283) 0:00:04.253 *********
2026-04-07 00:51:02.645009 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.645015 | orchestrator |
2026-04-07 00:51:02.645021 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-07 00:51:02.645027 | orchestrator | Tuesday 07 April 2026 00:49:41 +0000 (0:00:02.421) 0:00:06.674 *********
2026-04-07 00:51:02.645042 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-07 00:51:02.645048 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:02.645054 | orchestrator |
2026-04-07 00:51:02.645060 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-07 00:51:02.645066 | orchestrator | Tuesday 07 April 2026 00:50:17 +0000 (0:00:36.395) 0:00:43.069 *********
2026-04-07 00:51:02.645073 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.645079 | orchestrator |
2026-04-07 00:51:02.645085 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-07 00:51:02.645091 | orchestrator | Tuesday 07 April 2026 00:50:19 +0000 (0:00:01.617) 0:00:44.686 *********
2026-04-07 00:51:02.645097 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:02.645103 | orchestrator |
2026-04-07 00:51:02.645110 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-07 00:51:02.645116 | orchestrator | Tuesday 07 April 2026 00:50:19 +0000 (0:00:00.803) 0:00:45.490 *********
2026-04-07 00:51:02.645122 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.645128 | orchestrator |
2026-04-07 00:51:02.645134 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-07 00:51:02.645140 | orchestrator | Tuesday 07 April 2026 00:50:22 +0000 (0:00:02.369) 0:00:47.859 *********
2026-04-07 00:51:02.645146 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.645152 | orchestrator |
2026-04-07 00:51:02.645158 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-07 00:51:02.645164 | orchestrator | Tuesday 07 April 2026 00:50:23 +0000 (0:00:01.160) 0:00:49.020 *********
2026-04-07 00:51:02.645174 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.645180 | orchestrator |
2026-04-07 00:51:02.645185 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-07 00:51:02.645191 | orchestrator | Tuesday 07 April 2026 00:50:24 +0000 (0:00:01.416) 0:00:50.436 *********
2026-04-07 00:51:02.645197 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:02.645202 | orchestrator |
2026-04-07 00:51:02.645208 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:51:02.645214 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:51:02.645220 | orchestrator |
2026-04-07 00:51:02.645226 | orchestrator |
2026-04-07 00:51:02.645233 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:51:02.645262 | orchestrator | Tuesday 07 April 2026 00:50:25 +0000 (0:00:00.740) 0:00:51.177 *********
2026-04-07 00:51:02.645270 | orchestrator | ===============================================================================
2026-04-07 00:51:02.645276 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.40s
2026-04-07 00:51:02.645282 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.28s
2026-04-07 00:51:02.645288 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.42s
2026-04-07 00:51:02.645294 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.37s
2026-04-07 00:51:02.645307 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.62s
2026-04-07 00:51:02.645313 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.42s
2026-04-07 00:51:02.645319 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.16s
2026-04-07 00:51:02.645326 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.80s
2026-04-07 00:51:02.645332 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.74s
2026-04-07 00:51:02.645339 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.55s
2026-04-07 00:51:02.645346 | orchestrator |
2026-04-07 00:51:02.645352 | orchestrator |
2026-04-07 00:51:02.645358 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-04-07 00:51:02.645364 | orchestrator |
2026-04-07 00:51:02.645370 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-04-07 00:51:02.645377 | orchestrator | Tuesday 07 April 2026 00:49:53 +0000 (0:00:00.267) 0:00:00.267 *********
2026-04-07 00:51:02.645383 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:02.645389 | orchestrator |
2026-04-07 00:51:02.645395 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-04-07 00:51:02.645401 | orchestrator | Tuesday 07 April 2026 00:49:56 +0000 (0:00:02.750) 0:00:03.017 *********
2026-04-07 00:51:02.645407 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-04-07 00:51:02.645413 | orchestrator |
2026-04-07 00:51:02.645419 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-04-07 00:51:02.645425 | orchestrator | Tuesday 07 April 2026 00:49:57 +0000 (0:00:01.380) 0:00:04.398 *********
2026-04-07 00:51:02.645431 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.645437 | orchestrator |
2026-04-07 00:51:02.645444 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-04-07 00:51:02.645451 | orchestrator | Tuesday 07 April 2026 00:49:58 +0000 (0:00:01.000) 0:00:05.399 *********
2026-04-07 00:51:02.645458 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-04-07 00:51:02.645464 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:02.645471 | orchestrator |
2026-04-07 00:51:02.645478 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-04-07 00:51:02.645484 | orchestrator | Tuesday 07 April 2026 00:50:54 +0000 (0:00:55.401) 0:01:00.800 *********
2026-04-07 00:51:02.645490 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:02.645496 | orchestrator |
2026-04-07 00:51:02.645502 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:51:02.645508 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:51:02.645515 | orchestrator |
2026-04-07 00:51:02.645521 | orchestrator |
2026-04-07 00:51:02.645528 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:51:02.645541 | orchestrator | Tuesday 07 April 2026 00:50:59 +0000 (0:00:05.612) 0:01:06.413 *********
2026-04-07 00:51:02.645548 | orchestrator | ===============================================================================
2026-04-07 00:51:02.645554 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 55.40s
2026-04-07 00:51:02.645560 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.61s
2026-04-07 00:51:02.645567 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.75s
2026-04-07 00:51:02.645573 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.38s
2026-04-07 00:51:02.645579 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.00s
2026-04-07 00:51:05.682592 | orchestrator | 2026-04-07 00:51:05 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED
2026-04-07 00:51:05.684183 | orchestrator | 2026-04-07 00:51:05 | INFO  | Task 5cdfffd8-0e4e-4a45-872c-82e1b987ea76 is in state SUCCESS
2026-04-07 00:51:05.685192 | orchestrator |
2026-04-07 00:51:05.685225 | orchestrator |
2026-04-07 00:51:05.685233 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 00:51:05.685259 | orchestrator |
2026-04-07 00:51:05.685265 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 00:51:05.685272 | orchestrator | Tuesday 07 April 2026 00:49:35 +0000 (0:00:00.714) 0:00:00.714 *********
2026-04-07 00:51:05.685286 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-07 00:51:05.685292 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-07 00:51:05.685298 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-07 00:51:05.685303 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-07 00:51:05.685309 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-07 00:51:05.685315 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-07 00:51:05.685321 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-07 00:51:05.685327 | orchestrator |
2026-04-07 00:51:05.685333 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-07 00:51:05.685339 | orchestrator |
2026-04-07 00:51:05.685344 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-07 00:51:05.685350 | orchestrator | Tuesday 07 April 2026 00:49:36 +0000 (0:00:01.352) 0:00:02.067 *********
2026-04-07 00:51:05.685365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:51:05.685373 | orchestrator |
2026-04-07 00:51:05.685379 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-07 00:51:05.685385 | orchestrator | Tuesday 07 April 2026 00:49:38 +0000 (0:00:01.187) 0:00:03.255 *********
2026-04-07 00:51:05.685391 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:51:05.685399 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:51:05.685405 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:05.685411 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:51:05.685417 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:51:05.685422 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:51:05.685428 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:51:05.685433 | orchestrator |
2026-04-07 00:51:05.685439 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-07 00:51:05.685445 | orchestrator | Tuesday 07 April 2026 00:49:40 +0000 (0:00:02.819) 0:00:06.075 *********
2026-04-07 00:51:05.685450 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:51:05.685456 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:05.685462 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:51:05.685467 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:51:05.685473 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:51:05.685480 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:51:05.685488 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:51:05.685494 | orchestrator |
2026-04-07 00:51:05.685500 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-07 00:51:05.685506 | orchestrator | Tuesday 07 April 2026 00:49:44 +0000 (0:00:03.929) 0:00:10.004 *********
2026-04-07 00:51:05.685513 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:51:05.685518 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:51:05.685525 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:05.685530 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:51:05.685536 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:51:05.685542 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:51:05.685549 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:51:05.685555 | orchestrator |
2026-04-07 00:51:05.685560 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-07 00:51:05.685568 | orchestrator | Tuesday 07 April 2026 00:49:46 +0000 (0:00:01.924) 0:00:11.929 *********
2026-04-07 00:51:05.685587 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:51:05.685593 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:51:05.685598 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:51:05.685604 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:51:05.685610 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:51:05.685616 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:51:05.685622 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:05.685628 | orchestrator |
2026-04-07 00:51:05.685634 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-07 00:51:05.685639 | orchestrator | Tuesday 07 April 2026 00:49:58 +0000 (0:00:11.846) 0:00:23.776 *********
2026-04-07 00:51:05.685645 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:51:05.685651 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:51:05.685657 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:51:05.685662 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:51:05.685668 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:51:05.685674 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:51:05.685679 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:05.685686 | orchestrator |
2026-04-07 00:51:05.685692 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-07 00:51:05.685698 | orchestrator | Tuesday 07 April 2026 00:50:36 +0000 (0:00:37.759) 0:01:01.536 *********
2026-04-07 00:51:05.685706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:51:05.685713 | orchestrator |
2026-04-07 00:51:05.685720 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-07 00:51:05.685726 | orchestrator | Tuesday 07 April 2026 00:50:37 +0000 (0:00:01.485) 0:01:03.021 *********
2026-04-07 00:51:05.685731 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-07 00:51:05.685738 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-07 00:51:05.685744 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-07 00:51:05.685749 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-07 00:51:05.685769 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-07 00:51:05.685775 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-07 00:51:05.685781 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-07 00:51:05.685786 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-07 00:51:05.685792 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-07 00:51:05.685799 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-07 00:51:05.685804 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-07 00:51:05.685810 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-07 00:51:05.685816 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-07 00:51:05.685822 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-07 00:51:05.685828 | orchestrator |
2026-04-07 00:51:05.685834 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-07 00:51:05.685842 | orchestrator | Tuesday 07 April 2026 00:50:43 +0000 (0:00:05.285) 0:01:08.307 *********
2026-04-07 00:51:05.685848 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:05.685854 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:51:05.685860 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:51:05.685866 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:51:05.685873 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:51:05.685879 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:51:05.685884 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:51:05.685890 | orchestrator |
2026-04-07 00:51:05.685896 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-07 00:51:05.685900 | orchestrator | Tuesday 07 April 2026 00:50:44 +0000 (0:00:01.384) 0:01:09.692 *********
2026-04-07 00:51:05.685911 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:51:05.685916 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:05.685920 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:51:05.685923 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:51:05.685927 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:51:05.685931 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:51:05.685935 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:51:05.685939 | orchestrator |
2026-04-07 00:51:05.685943 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-07 00:51:05.685947 | orchestrator | Tuesday 07 April 2026 00:50:45 +0000 (0:00:01.318) 0:01:11.010 *********
2026-04-07 00:51:05.685951 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:51:05.686098 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:05.686106 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:51:05.686112 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:51:05.686118 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:51:05.686124 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:51:05.686130 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:51:05.686135 | orchestrator |
2026-04-07 00:51:05.686141 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-07 00:51:05.686176 | orchestrator | Tuesday 07 April 2026 00:50:47 +0000 (0:00:01.324) 0:01:12.334 *********
2026-04-07 00:51:05.686183 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:51:05.686189 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:51:05.686195 | orchestrator | ok: [testbed-manager]
2026-04-07 00:51:05.686201 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:51:05.686207 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:51:05.686213 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:51:05.686219 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:51:05.686224 | orchestrator |
2026-04-07 00:51:05.686230 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-04-07 00:51:05.686286 | orchestrator | Tuesday 07 April 2026 00:50:49 +0000 (0:00:01.902) 0:01:14.237 *********
2026-04-07 00:51:05.686297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-07 00:51:05.686307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:51:05.686314 | orchestrator |
2026-04-07 00:51:05.686320 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-07 00:51:05.686326 | orchestrator | Tuesday 07 April 2026 00:50:50 +0000 (0:00:01.751) 0:01:15.684 *********
2026-04-07 00:51:05.686333 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:05.686340 | orchestrator |
2026-04-07 00:51:05.686346 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-07 00:51:05.686351 | orchestrator | Tuesday 07 April 2026 00:50:52 +0000 (0:00:01.751) 0:01:17.436 *********
2026-04-07 00:51:05.686357 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:51:05.686363 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:51:05.686369 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:51:05.686374 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:51:05.686380 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:51:05.686386 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:51:05.686392 | orchestrator | changed: [testbed-manager]
2026-04-07 00:51:05.686398 | orchestrator |
2026-04-07 00:51:05.686403 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:51:05.686409 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:51:05.686418 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:51:05.686434 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:51:05.686441 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:51:05.686458 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:51:05.686465 | orchestrator | testbed-node-4 : ok=15  changed=7
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:51:05.686477 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:51:05.686483 | orchestrator | 2026-04-07 00:51:05.686489 | orchestrator | 2026-04-07 00:51:05.686495 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:51:05.686501 | orchestrator | Tuesday 07 April 2026 00:51:03 +0000 (0:00:11.014) 0:01:28.450 ********* 2026-04-07 00:51:05.686507 | orchestrator | =============================================================================== 2026-04-07 00:51:05.686513 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 37.76s 2026-04-07 00:51:05.686518 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.85s 2026-04-07 00:51:05.686524 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.01s 2026-04-07 00:51:05.686529 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.29s 2026-04-07 00:51:05.686535 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.93s 2026-04-07 00:51:05.686541 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.82s 2026-04-07 00:51:05.686547 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.93s 2026-04-07 00:51:05.686553 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.90s 2026-04-07 00:51:05.686559 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.75s 2026-04-07 00:51:05.686565 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.49s 2026-04-07 00:51:05.686571 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 
1.45s 2026-04-07 00:51:05.686577 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.38s 2026-04-07 00:51:05.686584 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s 2026-04-07 00:51:05.686590 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.32s 2026-04-07 00:51:05.686596 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.32s 2026-04-07 00:51:05.686601 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.19s 2026-04-07 00:51:05.686608 | orchestrator | 2026-04-07 00:51:05 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:51:05.687781 | orchestrator | 2026-04-07 00:51:05 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:51:05.687888 | orchestrator | 2026-04-07 00:51:05 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:51:08.720727 | orchestrator | 2026-04-07 00:51:08 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:51:08.721231 | orchestrator | 2026-04-07 00:51:08 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:51:08.723618 | orchestrator | 2026-04-07 00:51:08 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:51:08.723668 | orchestrator | 2026-04-07 00:51:08 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:51:11.787058 | orchestrator | 2026-04-07 00:51:11 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:51:11.789345 | orchestrator | 2026-04-07 00:51:11 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:51:11.791016 | orchestrator | 2026-04-07 00:51:11 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:51:11.791140 | orchestrator 
| 2026-04-07 00:51:11 | INFO  | Wait 1 second(s) until the next check [repeated polling output omitted: from 00:51:14 to 00:51:45 the same three tasks (acc15307-5964-409d-a638-409b10a50271, 5244d43f-2838-4cdc-9e21-f6bbf9f77685, 416047b1-3b0d-46e6-9711-0fd037214fb6) were reported in state STARTED every ~3 seconds] 2026-04-07 00:51:48.421498 | orchestrator | 2026-04-07 00:51:48 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:51:48.422786 | orchestrator |
2026-04-07 00:51:48 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:51:48.424446 | orchestrator | 2026-04-07 00:51:48 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:51:48.424471 | orchestrator | 2026-04-07 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:51:51.464066 | orchestrator | 2026-04-07 00:51:51 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:51:51.464149 | orchestrator | 2026-04-07 00:51:51 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state STARTED 2026-04-07 00:51:51.466747 | orchestrator | 2026-04-07 00:51:51 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:51:51.466825 | orchestrator | 2026-04-07 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:51:54.489096 | orchestrator | 2026-04-07 00:51:54 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:51:54.490941 | orchestrator | 2026-04-07 00:51:54 | INFO  | Task eb5810ce-27b9-4de4-b15a-1fd9a6d2b167 is in state STARTED 2026-04-07 00:51:54.492639 | orchestrator | 2026-04-07 00:51:54 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:51:54.494124 | orchestrator | 2026-04-07 00:51:54 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:51:54.501746 | orchestrator | 2026-04-07 00:51:54.501810 | orchestrator | 2026-04-07 00:51:54 | INFO  | Task 5244d43f-2838-4cdc-9e21-f6bbf9f77685 is in state SUCCESS 2026-04-07 00:51:54.503104 | orchestrator | 2026-04-07 00:51:54.503142 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-07 00:51:54.503147 | orchestrator | 2026-04-07 00:51:54.503152 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-07 00:51:54.503157 | orchestrator | Tuesday 07 April 2026 00:49:29 
+0000 (0:00:00.287) 0:00:00.287 ********* 2026-04-07 00:51:54.503162 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:51:54.503167 | orchestrator | 2026-04-07 00:51:54.503171 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-07 00:51:54.503175 | orchestrator | Tuesday 07 April 2026 00:49:31 +0000 (0:00:01.219) 0:00:01.506 ********* 2026-04-07 00:51:54.503179 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 00:51:54.503183 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 00:51:54.503187 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 00:51:54.503191 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 00:51:54.503195 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 00:51:54.503234 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 00:51:54.503239 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 00:51:54.503243 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 00:51:54.503248 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 00:51:54.503252 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 00:51:54.503255 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 00:51:54.503259 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 00:51:54.503263 | 
orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 00:51:54.503267 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 00:51:54.503271 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 00:51:54.503292 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 00:51:54.503296 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 00:51:54.503300 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 00:51:54.503304 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 00:51:54.503308 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 00:51:54.503313 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 00:51:54.503316 | orchestrator | 2026-04-07 00:51:54.503320 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-07 00:51:54.503324 | orchestrator | Tuesday 07 April 2026 00:49:34 +0000 (0:00:03.777) 0:00:05.284 ********* 2026-04-07 00:51:54.503340 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:51:54.503371 | orchestrator | 2026-04-07 00:51:54.503374 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-07 00:51:54.503381 | orchestrator | Tuesday 07 April 2026 00:49:36 +0000 (0:00:01.329) 0:00:06.614 ********* 2026-04-07 00:51:54.503389 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.503395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.503468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.503473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.503477 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.503481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.503492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-07 00:51:54.503505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.503509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.503520 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.503524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.503528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.503533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.503545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.503552 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.503558 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.503562 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.503569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503573 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503585 | orchestrator |
2026-04-07 00:51:54.503592 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-07 00:51:54.503598 | orchestrator | Tuesday 07 April 2026 00:49:42 +0000 (0:00:06.027) 0:00:12.641 *********
2026-04-07 00:51:54.503604 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503614 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503621 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503660 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:51:54.503667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503712 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:51:54.503719 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:51:54.503733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503768 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:51:54.503773 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:51:54.503777 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:51:54.503782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503799 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:51:54.503804 | orchestrator |
2026-04-07 00:51:54.503808 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-04-07 00:51:54.503813 | orchestrator | Tuesday 07 April 2026 00:49:44 +0000 (0:00:02.109) 0:00:14.751 *********
2026-04-07 00:51:54.503817 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503825 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503830 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503835 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:51:54.503839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.503883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.503894 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:51:54.503899 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:51:54.503903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504211 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:51:54.504215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504219 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:51:54.504223 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:51:54.504227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504241 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:51:54.504245 | orchestrator |
2026-04-07 00:51:54.504249 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-07 00:51:54.504253 | orchestrator | Tuesday 07 April 2026 00:49:48 +0000 (0:00:03.747) 0:00:18.499 *********
2026-04-07 00:51:54.504257 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:51:54.504261 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:51:54.504265 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:51:54.504271 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:51:54.504275 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:51:54.504281 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:51:54.504285 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:51:54.504289 | orchestrator |
2026-04-07 00:51:54.504293 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-07 00:51:54.504297 | orchestrator | Tuesday 07 April 2026 00:49:49 +0000 (0:00:01.554) 0:00:20.053 *********
2026-04-07 00:51:54.504301 | orchestrator | skipping: [testbed-manager]
2026-04-07 00:51:54.504304 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:51:54.504308 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:51:54.504312 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:51:54.504315 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:51:54.504319 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:51:54.504323 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:51:54.504327 | orchestrator |
2026-04-07 00:51:54.504330 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-07 00:51:54.504334 | orchestrator | Tuesday 07 April 2026 00:49:50 +0000 (0:00:01.424) 0:00:21.477 *********
2026-04-07 00:51:54.504338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504342 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504377 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 00:51:54.504432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504459 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:51:54.504475 | orchestrator |
2026-04-07 00:51:54.504479 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-07 00:51:54.504483 | orchestrator | Tuesday 07 April 2026 00:49:58 +0000 (0:00:07.186) 0:00:28.664 *********
2026-04-07 00:51:54.504487 | orchestrator | [WARNING]: Skipped
2026-04-07 00:51:54.504492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-07 00:51:54.504497 | orchestrator | to this access issue:
2026-04-07 00:51:54.504501 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-07 00:51:54.504505 | orchestrator | directory
2026-04-07 00:51:54.504509 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 00:51:54.504513 | orchestrator |
2026-04-07 00:51:54.504516 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-07 00:51:54.504520 | orchestrator | Tuesday 07 April 2026 00:49:59 +0000 (0:00:01.374) 0:00:30.038 *********
2026-04-07 00:51:54.504524 | orchestrator | [WARNING]: Skipped
2026-04-07 00:51:54.504528 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-07 00:51:54.504533 | orchestrator | to this access issue:
2026-04-07 00:51:54.504537 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-07 00:51:54.504541 | orchestrator | directory
2026-04-07 00:51:54.504545 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 00:51:54.504549 | orchestrator |
2026-04-07 00:51:54.504552 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-07 00:51:54.504556 | orchestrator | Tuesday 07 April 2026 00:50:00 +0000 (0:00:00.773) 0:00:30.811 *********
2026-04-07 00:51:54.504560 | orchestrator | [WARNING]: Skipped
2026-04-07 00:51:54.504564 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-07 00:51:54.504568 | orchestrator | to this access issue:
2026-04-07 00:51:54.504571 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-07 00:51:54.504575 | orchestrator | directory 2026-04-07 00:51:54.504579 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 00:51:54.504583 | orchestrator | 2026-04-07 00:51:54.504589 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-07 00:51:54.504595 | orchestrator | Tuesday 07 April 2026 00:50:01 +0000 (0:00:01.273) 0:00:32.085 ********* 2026-04-07 00:51:54.504601 | orchestrator | [WARNING]: Skipped 2026-04-07 00:51:54.504607 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-07 00:51:54.504614 | orchestrator | to this access issue: 2026-04-07 00:51:54.504621 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-07 00:51:54.504627 | orchestrator | directory 2026-04-07 00:51:54.504633 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 00:51:54.504638 | orchestrator | 2026-04-07 00:51:54.504644 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-07 00:51:54.504650 | orchestrator | Tuesday 07 April 2026 00:50:02 +0000 (0:00:01.046) 0:00:33.132 ********* 2026-04-07 00:51:54.504655 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:51:54.504661 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:51:54.504667 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:51:54.504672 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:51:54.504678 | orchestrator | changed: [testbed-manager] 2026-04-07 00:51:54.504684 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:51:54.504689 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:51:54.504695 | orchestrator | 2026-04-07 00:51:54.504705 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 
2026-04-07 00:51:54.504711 | orchestrator | Tuesday 07 April 2026 00:50:08 +0000 (0:00:05.443) 0:00:38.575 ********* 2026-04-07 00:51:54.504718 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 00:51:54.504724 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 00:51:54.504735 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 00:51:54.504743 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 00:51:54.504749 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 00:51:54.504755 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 00:51:54.504760 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 00:51:54.504766 | orchestrator | 2026-04-07 00:51:54.504771 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-07 00:51:54.504777 | orchestrator | Tuesday 07 April 2026 00:50:12 +0000 (0:00:04.254) 0:00:42.829 ********* 2026-04-07 00:51:54.504783 | orchestrator | changed: [testbed-manager] 2026-04-07 00:51:54.504788 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:51:54.504794 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:51:54.504799 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:51:54.504804 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:51:54.504810 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:51:54.504816 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:51:54.504822 | orchestrator | 2026-04-07 00:51:54.504832 | orchestrator | TASK [common : 
Ensuring config directories have correct owner and permission] *** 2026-04-07 00:51:54.504839 | orchestrator | Tuesday 07 April 2026 00:50:16 +0000 (0:00:03.887) 0:00:46.717 ********* 2026-04-07 00:51:54.504845 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.504857 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:51:54.504863 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.504867 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:51:54.504875 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.504880 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.504884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:51:54.504890 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.504894 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.504901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:51:54.504905 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.504912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:51:54.504916 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.504921 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.504988 | orchestrator 
| ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.504996 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.505001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:51:54.505009 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.505013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:51:54.505020 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505024 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505028 | orchestrator | 2026-04-07 00:51:54.505032 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-07 00:51:54.505035 | orchestrator | Tuesday 07 April 2026 00:50:20 +0000 (0:00:04.393) 0:00:51.110 ********* 
2026-04-07 00:51:54.505039 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 00:51:54.505043 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 00:51:54.505047 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 00:51:54.505051 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 00:51:54.505054 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 00:51:54.505058 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 00:51:54.505062 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 00:51:54.505066 | orchestrator | 2026-04-07 00:51:54.505069 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-07 00:51:54.505073 | orchestrator | Tuesday 07 April 2026 00:50:23 +0000 (0:00:02.864) 0:00:53.974 ********* 2026-04-07 00:51:54.505077 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 00:51:54.505081 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 00:51:54.505085 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 00:51:54.505091 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 00:51:54.505094 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 00:51:54.505098 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 
00:51:54.505102 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 00:51:54.505106 | orchestrator | 2026-04-07 00:51:54.505109 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-07 00:51:54.505113 | orchestrator | Tuesday 07 April 2026 00:50:26 +0000 (0:00:02.874) 0:00:56.849 ********* 2026-04-07 00:51:54.505117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.505127 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.505131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.505135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.505139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505143 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.505154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.505169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505173 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505181 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 00:51:54.505185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:51:54.505226 | orchestrator | 2026-04-07 00:51:54.505230 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-07 00:51:54.505233 | orchestrator | Tuesday 07 April 2026 00:50:29 +0000 (0:00:03.426) 0:01:00.275 ********* 2026-04-07 00:51:54.505237 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:51:54.505241 | orchestrator | changed: [testbed-manager] 2026-04-07 00:51:54.505245 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:51:54.505248 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:51:54.505252 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:51:54.505256 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:51:54.505260 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:51:54.505263 | orchestrator | 2026-04-07 00:51:54.505267 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-07 00:51:54.505271 | orchestrator | Tuesday 07 April 2026 00:50:31 +0000 (0:00:01.648) 0:01:01.923 ********* 2026-04-07 00:51:54.505275 | orchestrator | changed: [testbed-manager] 2026-04-07 00:51:54.505278 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:51:54.505282 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:51:54.505288 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:51:54.505292 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:51:54.505296 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:51:54.505299 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:51:54.505303 | orchestrator | 2026-04-07 00:51:54.505307 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 00:51:54.505313 | orchestrator | Tuesday 07 April 2026 00:50:33 +0000 (0:00:01.928) 0:01:03.851 ********* 2026-04-07 00:51:54.505317 | orchestrator | 2026-04-07 00:51:54.505321 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2026-04-07 00:51:54.505325 | orchestrator | Tuesday 07 April 2026 00:50:33 +0000 (0:00:00.127) 0:01:03.979 ********* 2026-04-07 00:51:54.505328 | orchestrator | 2026-04-07 00:51:54.505332 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 00:51:54.505336 | orchestrator | Tuesday 07 April 2026 00:50:33 +0000 (0:00:00.084) 0:01:04.064 ********* 2026-04-07 00:51:54.505340 | orchestrator | 2026-04-07 00:51:54.505343 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 00:51:54.505347 | orchestrator | Tuesday 07 April 2026 00:50:33 +0000 (0:00:00.112) 0:01:04.176 ********* 2026-04-07 00:51:54.505351 | orchestrator | 2026-04-07 00:51:54.505354 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 00:51:54.505358 | orchestrator | Tuesday 07 April 2026 00:50:33 +0000 (0:00:00.142) 0:01:04.319 ********* 2026-04-07 00:51:54.505362 | orchestrator | 2026-04-07 00:51:54.505366 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 00:51:54.505369 | orchestrator | Tuesday 07 April 2026 00:50:33 +0000 (0:00:00.091) 0:01:04.411 ********* 2026-04-07 00:51:54.505373 | orchestrator | 2026-04-07 00:51:54.505377 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 00:51:54.505381 | orchestrator | Tuesday 07 April 2026 00:50:34 +0000 (0:00:00.086) 0:01:04.498 ********* 2026-04-07 00:51:54.505384 | orchestrator | 2026-04-07 00:51:54.505388 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-07 00:51:54.505394 | orchestrator | Tuesday 07 April 2026 00:50:34 +0000 (0:00:00.137) 0:01:04.635 ********* 2026-04-07 00:51:54.505398 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:51:54.505426 | 
orchestrator | changed: [testbed-node-1] 2026-04-07 00:51:54.505432 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:51:54.505438 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:51:54.505444 | orchestrator | changed: [testbed-manager] 2026-04-07 00:51:54.505449 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:51:54.505454 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:51:54.505459 | orchestrator | 2026-04-07 00:51:54.505469 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-07 00:51:54.505477 | orchestrator | Tuesday 07 April 2026 00:51:06 +0000 (0:00:32.491) 0:01:37.126 ********* 2026-04-07 00:51:54.505483 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:51:54.505489 | orchestrator | changed: [testbed-manager] 2026-04-07 00:51:54.505494 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:51:54.505501 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:51:54.505507 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:51:54.505512 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:51:54.505517 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:51:54.505523 | orchestrator | 2026-04-07 00:51:54.505530 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-07 00:51:54.505536 | orchestrator | Tuesday 07 April 2026 00:51:41 +0000 (0:00:34.734) 0:02:11.861 ********* 2026-04-07 00:51:54.505542 | orchestrator | ok: [testbed-manager] 2026-04-07 00:51:54.505549 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:51:54.505555 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:51:54.505562 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:51:54.505568 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:51:54.505573 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:51:54.505579 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:51:54.505595 | orchestrator | 2026-04-07 00:51:54.505601 | orchestrator | 
RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-07 00:51:54.505607 | orchestrator | Tuesday 07 April 2026 00:51:43 +0000 (0:00:01.833) 0:02:13.694 ********* 2026-04-07 00:51:54.505614 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:51:54.505620 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:51:54.505626 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:51:54.505631 | orchestrator | changed: [testbed-manager] 2026-04-07 00:51:54.505638 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:51:54.505644 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:51:54.505651 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:51:54.505657 | orchestrator | 2026-04-07 00:51:54.505664 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:51:54.505672 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 00:51:54.505679 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 00:51:54.505686 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 00:51:54.505693 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 00:51:54.505697 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 00:51:54.505702 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 00:51:54.505706 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 00:51:54.505711 | orchestrator | 2026-04-07 00:51:54.505715 | orchestrator | 2026-04-07 00:51:54.505720 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-07 00:51:54.505724 | orchestrator | Tuesday 07 April 2026 00:51:51 +0000 (0:00:08.657) 0:02:22.352 ********* 2026-04-07 00:51:54.505729 | orchestrator | =============================================================================== 2026-04-07 00:51:54.505733 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.73s 2026-04-07 00:51:54.505738 | orchestrator | common : Restart fluentd container ------------------------------------- 32.49s 2026-04-07 00:51:54.505742 | orchestrator | common : Restart cron container ----------------------------------------- 8.66s 2026-04-07 00:51:54.505746 | orchestrator | common : Copying over config.json files for services -------------------- 7.19s 2026-04-07 00:51:54.505751 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.03s 2026-04-07 00:51:54.505755 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.44s 2026-04-07 00:51:54.505759 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.39s 2026-04-07 00:51:54.505763 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.25s 2026-04-07 00:51:54.505768 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.89s 2026-04-07 00:51:54.505772 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.78s 2026-04-07 00:51:54.505777 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.75s 2026-04-07 00:51:54.505781 | orchestrator | common : Check common containers ---------------------------------------- 3.43s 2026-04-07 00:51:54.505785 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.87s 2026-04-07 00:51:54.505790 | orchestrator | common : Copy rabbitmq-env.conf 
to kolla toolbox ------------------------ 2.86s 2026-04-07 00:51:54.505803 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.11s 2026-04-07 00:51:54.505807 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.93s 2026-04-07 00:51:54.505812 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.83s 2026-04-07 00:51:54.505816 | orchestrator | common : Creating log volume -------------------------------------------- 1.65s 2026-04-07 00:51:54.505821 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.55s 2026-04-07 00:51:54.505825 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.42s 2026-04-07 00:51:54.505829 | orchestrator | 2026-04-07 00:51:54 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:51:54.507914 | orchestrator | 2026-04-07 00:51:54 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:51:54.507952 | orchestrator | 2026-04-07 00:51:54 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:51:57.531330 | orchestrator | 2026-04-07 00:51:57 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:51:57.531815 | orchestrator | 2026-04-07 00:51:57 | INFO  | Task eb5810ce-27b9-4de4-b15a-1fd9a6d2b167 is in state STARTED 2026-04-07 00:51:57.532655 | orchestrator | 2026-04-07 00:51:57 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:51:57.533536 | orchestrator | 2026-04-07 00:51:57 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:51:57.534271 | orchestrator | 2026-04-07 00:51:57 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:51:57.534846 | orchestrator | 2026-04-07 00:51:57 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 
2026-04-07 00:51:57.534920 | orchestrator | 2026-04-07 00:51:57 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:00.559939 | orchestrator | 2026-04-07 00:52:00 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:00.560000 | orchestrator | 2026-04-07 00:52:00 | INFO  | Task eb5810ce-27b9-4de4-b15a-1fd9a6d2b167 is in state STARTED 2026-04-07 00:52:00.560776 | orchestrator | 2026-04-07 00:52:00 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:52:00.561620 | orchestrator | 2026-04-07 00:52:00 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:00.562314 | orchestrator | 2026-04-07 00:52:00 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:00.564903 | orchestrator | 2026-04-07 00:52:00 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:00.564958 | orchestrator | 2026-04-07 00:52:00 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:03.588192 | orchestrator | 2026-04-07 00:52:03 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:03.588527 | orchestrator | 2026-04-07 00:52:03 | INFO  | Task eb5810ce-27b9-4de4-b15a-1fd9a6d2b167 is in state STARTED 2026-04-07 00:52:03.589957 | orchestrator | 2026-04-07 00:52:03 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:52:03.591222 | orchestrator | 2026-04-07 00:52:03 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:03.592544 | orchestrator | 2026-04-07 00:52:03 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:03.593718 | orchestrator | 2026-04-07 00:52:03 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:03.593952 | orchestrator | 2026-04-07 00:52:03 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:06.617827 | 
orchestrator | 2026-04-07 00:52:06 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:06.617922 | orchestrator | 2026-04-07 00:52:06 | INFO  | Task eb5810ce-27b9-4de4-b15a-1fd9a6d2b167 is in state SUCCESS 2026-04-07 00:52:06.618654 | orchestrator | 2026-04-07 00:52:06 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:06.619215 | orchestrator | 2026-04-07 00:52:06 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:52:06.621133 | orchestrator | 2026-04-07 00:52:06 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:06.621880 | orchestrator | 2026-04-07 00:52:06 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:06.622371 | orchestrator | 2026-04-07 00:52:06 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:06.622749 | orchestrator | 2026-04-07 00:52:06 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:09.648010 | orchestrator | 2026-04-07 00:52:09 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:09.648154 | orchestrator | 2026-04-07 00:52:09 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:09.649001 | orchestrator | 2026-04-07 00:52:09 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:52:09.649884 | orchestrator | 2026-04-07 00:52:09 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:09.650598 | orchestrator | 2026-04-07 00:52:09 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:09.651403 | orchestrator | 2026-04-07 00:52:09 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:09.651428 | orchestrator | 2026-04-07 00:52:09 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:12.691124 | 
orchestrator | 2026-04-07 00:52:12 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:12.692499 | orchestrator | 2026-04-07 00:52:12 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:12.693604 | orchestrator | 2026-04-07 00:52:12 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:52:12.695381 | orchestrator | 2026-04-07 00:52:12 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:12.696492 | orchestrator | 2026-04-07 00:52:12 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:12.697590 | orchestrator | 2026-04-07 00:52:12 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:12.697617 | orchestrator | 2026-04-07 00:52:12 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:15.763098 | orchestrator | 2026-04-07 00:52:15 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:15.768662 | orchestrator | 2026-04-07 00:52:15 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:15.770479 | orchestrator | 2026-04-07 00:52:15 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:52:15.771875 | orchestrator | 2026-04-07 00:52:15 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:15.774164 | orchestrator | 2026-04-07 00:52:15 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:15.776410 | orchestrator | 2026-04-07 00:52:15 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:15.776510 | orchestrator | 2026-04-07 00:52:15 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:18.826452 | orchestrator | 2026-04-07 00:52:18 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:18.826565 | 
orchestrator | 2026-04-07 00:52:18 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:18.826572 | orchestrator | 2026-04-07 00:52:18 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:52:18.826592 | orchestrator | 2026-04-07 00:52:18 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:18.827116 | orchestrator | 2026-04-07 00:52:18 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:18.833132 | orchestrator | 2026-04-07 00:52:18 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:18.833226 | orchestrator | 2026-04-07 00:52:18 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:21.879038 | orchestrator | 2026-04-07 00:52:21 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:21.885182 | orchestrator | 2026-04-07 00:52:21 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:21.893355 | orchestrator | 2026-04-07 00:52:21 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:52:21.897902 | orchestrator | 2026-04-07 00:52:21 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:21.899233 | orchestrator | 2026-04-07 00:52:21 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:21.900457 | orchestrator | 2026-04-07 00:52:21 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:21.900516 | orchestrator | 2026-04-07 00:52:21 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:25.150422 | orchestrator | 2026-04-07 00:52:25 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:25.155841 | orchestrator | 2026-04-07 00:52:25 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:25.160213 | 
orchestrator | 2026-04-07 00:52:25 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state STARTED 2026-04-07 00:52:25.175079 | orchestrator | 2026-04-07 00:52:25 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:25.175155 | orchestrator | 2026-04-07 00:52:25 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:25.176051 | orchestrator | 2026-04-07 00:52:25 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:25.176091 | orchestrator | 2026-04-07 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:28.215083 | orchestrator | 2026-04-07 00:52:28 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:28.215994 | orchestrator | 2026-04-07 00:52:28 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:28.216710 | orchestrator | 2026-04-07 00:52:28 | INFO  | Task bf00cdea-620d-4fde-8af3-7431129b188b is in state SUCCESS 2026-04-07 00:52:28.217759 | orchestrator | 2026-04-07 00:52:28.217796 | orchestrator | 2026-04-07 00:52:28.217802 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 00:52:28.217807 | orchestrator | 2026-04-07 00:52:28.217811 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 00:52:28.217815 | orchestrator | Tuesday 07 April 2026 00:51:55 +0000 (0:00:00.338) 0:00:00.338 ********* 2026-04-07 00:52:28.217840 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:52:28.217849 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:52:28.217855 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:52:28.217861 | orchestrator | 2026-04-07 00:52:28.217867 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 00:52:28.217873 | orchestrator | Tuesday 07 April 2026 00:51:56 +0000 (0:00:00.321) 
0:00:00.660 ********* 2026-04-07 00:52:28.217880 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-07 00:52:28.217887 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-07 00:52:28.217894 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-07 00:52:28.217901 | orchestrator | 2026-04-07 00:52:28.217905 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-07 00:52:28.217909 | orchestrator | 2026-04-07 00:52:28.217913 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-07 00:52:28.217917 | orchestrator | Tuesday 07 April 2026 00:51:56 +0000 (0:00:00.366) 0:00:01.027 ********* 2026-04-07 00:52:28.217921 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:52:28.217926 | orchestrator | 2026-04-07 00:52:28.217930 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-07 00:52:28.217934 | orchestrator | Tuesday 07 April 2026 00:51:57 +0000 (0:00:00.687) 0:00:01.714 ********* 2026-04-07 00:52:28.217937 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-07 00:52:28.217941 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-07 00:52:28.217945 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-07 00:52:28.217949 | orchestrator | 2026-04-07 00:52:28.217953 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-07 00:52:28.217956 | orchestrator | Tuesday 07 April 2026 00:51:58 +0000 (0:00:01.774) 0:00:03.489 ********* 2026-04-07 00:52:28.217960 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-07 00:52:28.217974 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-07 00:52:28.217977 | orchestrator | changed: 
[testbed-node-2] => (item=memcached) 2026-04-07 00:52:28.217981 | orchestrator | 2026-04-07 00:52:28.217987 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-04-07 00:52:28.217992 | orchestrator | Tuesday 07 April 2026 00:52:00 +0000 (0:00:01.642) 0:00:05.132 ********* 2026-04-07 00:52:28.217998 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:52:28.218004 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:52:28.218010 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:52:28.218060 | orchestrator | 2026-04-07 00:52:28.218065 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-07 00:52:28.218068 | orchestrator | Tuesday 07 April 2026 00:52:02 +0000 (0:00:01.675) 0:00:06.807 ********* 2026-04-07 00:52:28.218072 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:52:28.218076 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:52:28.218080 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:52:28.218084 | orchestrator | 2026-04-07 00:52:28.218088 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:52:28.218092 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:52:28.218097 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:52:28.218101 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:52:28.218105 | orchestrator | 2026-04-07 00:52:28.218109 | orchestrator | 2026-04-07 00:52:28.218112 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:52:28.218116 | orchestrator | Tuesday 07 April 2026 00:52:05 +0000 (0:00:02.926) 0:00:09.734 ********* 2026-04-07 00:52:28.218127 | orchestrator | 
=============================================================================== 2026-04-07 00:52:28.218133 | orchestrator | memcached : Restart memcached container --------------------------------- 2.93s 2026-04-07 00:52:28.218138 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.77s 2026-04-07 00:52:28.218144 | orchestrator | memcached : Check memcached container ----------------------------------- 1.68s 2026-04-07 00:52:28.218150 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.64s 2026-04-07 00:52:28.218156 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.69s 2026-04-07 00:52:28.218162 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s 2026-04-07 00:52:28.218169 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-04-07 00:52:28.218175 | orchestrator | 2026-04-07 00:52:28.218181 | orchestrator | 2026-04-07 00:52:28.218187 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 00:52:28.218193 | orchestrator | 2026-04-07 00:52:28.218199 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 00:52:28.218206 | orchestrator | Tuesday 07 April 2026 00:51:55 +0000 (0:00:00.327) 0:00:00.327 ********* 2026-04-07 00:52:28.218211 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:52:28.218217 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:52:28.218224 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:52:28.218229 | orchestrator | 2026-04-07 00:52:28.218235 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 00:52:28.218252 | orchestrator | Tuesday 07 April 2026 00:51:55 +0000 (0:00:00.263) 0:00:00.590 ********* 2026-04-07 00:52:28.218257 | orchestrator | ok: [testbed-node-0] => 
(item=enable_redis_True) 2026-04-07 00:52:28.218261 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-07 00:52:28.218264 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-07 00:52:28.218268 | orchestrator | 2026-04-07 00:52:28.218272 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-07 00:52:28.218275 | orchestrator | 2026-04-07 00:52:28.218279 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-07 00:52:28.218283 | orchestrator | Tuesday 07 April 2026 00:51:56 +0000 (0:00:00.417) 0:00:01.008 ********* 2026-04-07 00:52:28.218287 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:52:28.218291 | orchestrator | 2026-04-07 00:52:28.218294 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-07 00:52:28.218298 | orchestrator | Tuesday 07 April 2026 00:51:57 +0000 (0:00:00.675) 0:00:01.683 ********* 2026-04-07 00:52:28.218304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218345 | orchestrator | 2026-04-07 00:52:28.218349 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-07 00:52:28.218353 | orchestrator | Tuesday 07 April 2026 00:51:59 +0000 (0:00:02.269) 0:00:03.952 ********* 2026-04-07 00:52:28.218357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218361 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218390 | orchestrator | 2026-04-07 00:52:28.218394 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-07 00:52:28.218398 | orchestrator | Tuesday 07 April 2026 00:52:01 +0000 (0:00:02.536) 0:00:06.489 ********* 2026-04-07 00:52:28.218402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218431 | orchestrator | 2026-04-07 00:52:28.218437 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-07 00:52:28.218441 | orchestrator | Tuesday 07 April 2026 00:52:04 +0000 (0:00:02.552) 0:00:09.042 ********* 2026-04-07 00:52:28.218445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 00:52:28.218487 | orchestrator | 2026-04-07 00:52:28.218493 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-07 00:52:28.218500 | orchestrator 
| Tuesday 07 April 2026 00:52:05 +0000 (0:00:01.487) 0:00:10.529 ********* 2026-04-07 00:52:28.218531 | orchestrator | 2026-04-07 00:52:28.218538 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-07 00:52:28.218548 | orchestrator | Tuesday 07 April 2026 00:52:06 +0000 (0:00:00.357) 0:00:10.887 ********* 2026-04-07 00:52:28.218554 | orchestrator | 2026-04-07 00:52:28.218560 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-07 00:52:28.218566 | orchestrator | Tuesday 07 April 2026 00:52:06 +0000 (0:00:00.099) 0:00:10.986 ********* 2026-04-07 00:52:28.218573 | orchestrator | 2026-04-07 00:52:28.218577 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-07 00:52:28.218580 | orchestrator | Tuesday 07 April 2026 00:52:06 +0000 (0:00:00.116) 0:00:11.102 ********* 2026-04-07 00:52:28.218584 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:52:28.218588 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:52:28.218592 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:52:28.218595 | orchestrator | 2026-04-07 00:52:28.218599 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-07 00:52:28.218611 | orchestrator | Tuesday 07 April 2026 00:52:14 +0000 (0:00:07.913) 0:00:19.016 ********* 2026-04-07 00:52:28.218615 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:52:28.218619 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:52:28.218623 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:52:28.218626 | orchestrator | 2026-04-07 00:52:28.218630 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:52:28.218634 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:52:28.218638 | orchestrator | testbed-node-1 : 
ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:52:28.218642 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 00:52:28.218645 | orchestrator | 2026-04-07 00:52:28.218649 | orchestrator | 2026-04-07 00:52:28.218653 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:52:28.218657 | orchestrator | Tuesday 07 April 2026 00:52:23 +0000 (0:00:09.494) 0:00:28.510 ********* 2026-04-07 00:52:28.218660 | orchestrator | =============================================================================== 2026-04-07 00:52:28.218664 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.49s 2026-04-07 00:52:28.218671 | orchestrator | redis : Restart redis container ----------------------------------------- 7.91s 2026-04-07 00:52:28.218674 | orchestrator | redis : Copying over redis config files --------------------------------- 2.55s 2026-04-07 00:52:28.218678 | orchestrator | redis : Copying over default config.json files -------------------------- 2.54s 2026-04-07 00:52:28.218682 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.27s 2026-04-07 00:52:28.218686 | orchestrator | redis : Check redis containers ------------------------------------------ 1.49s 2026-04-07 00:52:28.218689 | orchestrator | redis : include_tasks --------------------------------------------------- 0.68s 2026-04-07 00:52:28.218693 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.57s 2026-04-07 00:52:28.218697 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-04-07 00:52:28.218701 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-04-07 00:52:28.219797 | orchestrator | 2026-04-07 00:52:28 | INFO  | Task 
acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:28.219830 | orchestrator | 2026-04-07 00:52:28 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:28.219835 | orchestrator | 2026-04-07 00:52:28 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:28.219839 | orchestrator | 2026-04-07 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:31.264598 | orchestrator | 2026-04-07 00:52:31 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:31.264692 | orchestrator | 2026-04-07 00:52:31 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:31.265200 | orchestrator | 2026-04-07 00:52:31 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:31.265786 | orchestrator | 2026-04-07 00:52:31 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:31.266959 | orchestrator | 2026-04-07 00:52:31 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:31.266989 | orchestrator | 2026-04-07 00:52:31 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:34.296144 | orchestrator | 2026-04-07 00:52:34 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:34.296248 | orchestrator | 2026-04-07 00:52:34 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:34.296258 | orchestrator | 2026-04-07 00:52:34 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:34.299752 | orchestrator | 2026-04-07 00:52:34 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:34.301942 | orchestrator | 2026-04-07 00:52:34 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:34.302006 | orchestrator | 2026-04-07 00:52:34 | INFO  | Wait 1 
second(s) until the next check 2026-04-07 00:52:37.390476 | orchestrator | 2026-04-07 00:52:37 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:37.390620 | orchestrator | 2026-04-07 00:52:37 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:37.390634 | orchestrator | 2026-04-07 00:52:37 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:37.390641 | orchestrator | 2026-04-07 00:52:37 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:37.390646 | orchestrator | 2026-04-07 00:52:37 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:37.390653 | orchestrator | 2026-04-07 00:52:37 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:40.417527 | orchestrator | 2026-04-07 00:52:40 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:40.418403 | orchestrator | 2026-04-07 00:52:40 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:40.422187 | orchestrator | 2026-04-07 00:52:40 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:40.422976 | orchestrator | 2026-04-07 00:52:40 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:40.424907 | orchestrator | 2026-04-07 00:52:40 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:40.424961 | orchestrator | 2026-04-07 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:43.462260 | orchestrator | 2026-04-07 00:52:43 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:43.463248 | orchestrator | 2026-04-07 00:52:43 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:43.463545 | orchestrator | 2026-04-07 00:52:43 | INFO  | Task 
acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:43.464632 | orchestrator | 2026-04-07 00:52:43 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:43.465293 | orchestrator | 2026-04-07 00:52:43 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:43.465328 | orchestrator | 2026-04-07 00:52:43 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:46.505855 | orchestrator | 2026-04-07 00:52:46 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:46.510081 | orchestrator | 2026-04-07 00:52:46 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:46.510227 | orchestrator | 2026-04-07 00:52:46 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:46.510238 | orchestrator | 2026-04-07 00:52:46 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:46.510249 | orchestrator | 2026-04-07 00:52:46 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:46.510274 | orchestrator | 2026-04-07 00:52:46 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:49.544893 | orchestrator | 2026-04-07 00:52:49 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:49.546613 | orchestrator | 2026-04-07 00:52:49 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:49.547114 | orchestrator | 2026-04-07 00:52:49 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:49.548078 | orchestrator | 2026-04-07 00:52:49 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:49.549239 | orchestrator | 2026-04-07 00:52:49 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:49.549281 | orchestrator | 2026-04-07 00:52:49 | INFO  | Wait 1 
second(s) until the next check 2026-04-07 00:52:52.654294 | orchestrator | 2026-04-07 00:52:52 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:52.654379 | orchestrator | 2026-04-07 00:52:52 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:52.654389 | orchestrator | 2026-04-07 00:52:52 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:52.654396 | orchestrator | 2026-04-07 00:52:52 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:52.654403 | orchestrator | 2026-04-07 00:52:52 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state STARTED 2026-04-07 00:52:52.654411 | orchestrator | 2026-04-07 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:55.632358 | orchestrator | 2026-04-07 00:52:55 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:52:55.634667 | orchestrator | 2026-04-07 00:52:55 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:52:55.635120 | orchestrator | 2026-04-07 00:52:55 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:52:55.636074 | orchestrator | 2026-04-07 00:52:55 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:55.636735 | orchestrator | 2026-04-07 00:52:55 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:55.637827 | orchestrator | 2026-04-07 00:52:55 | INFO  | Task 0b01e93b-a1e5-4c86-be76-619ec65f5d8d is in state SUCCESS 2026-04-07 00:52:55.641127 | orchestrator | 2026-04-07 00:52:55 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:52:55.642286 | orchestrator | 2026-04-07 00:52:55.642337 | orchestrator | 2026-04-07 00:52:55.642348 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 00:52:55.642356 | 
orchestrator | 2026-04-07 00:52:55.642364 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 00:52:55.642371 | orchestrator | Tuesday 07 April 2026 00:51:55 +0000 (0:00:00.310) 0:00:00.310 ********* 2026-04-07 00:52:55.642377 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:52:55.642385 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:52:55.642392 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:52:55.642410 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:52:55.642425 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:52:55.642433 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:52:55.642440 | orchestrator | 2026-04-07 00:52:55.642447 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 00:52:55.642454 | orchestrator | Tuesday 07 April 2026 00:51:56 +0000 (0:00:00.691) 0:00:01.002 ********* 2026-04-07 00:52:55.642462 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 00:52:55.642470 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 00:52:55.642500 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 00:52:55.642508 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 00:52:55.642519 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 00:52:55.642542 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 00:52:55.642549 | orchestrator | 2026-04-07 00:52:55.642556 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-07 00:52:55.642563 | orchestrator | 2026-04-07 00:52:55.642571 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 
2026-04-07 00:52:55.642579 | orchestrator | Tuesday 07 April 2026 00:51:57 +0000 (0:00:01.146) 0:00:02.148 ********* 2026-04-07 00:52:55.642660 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:52:55.642666 | orchestrator | 2026-04-07 00:52:55.642671 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-07 00:52:55.642676 | orchestrator | Tuesday 07 April 2026 00:51:58 +0000 (0:00:01.429) 0:00:03.578 ********* 2026-04-07 00:52:55.642681 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-07 00:52:55.642686 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-07 00:52:55.642691 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-07 00:52:55.642696 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-07 00:52:55.642700 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-07 00:52:55.642704 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-07 00:52:55.642709 | orchestrator | 2026-04-07 00:52:55.642713 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-07 00:52:55.642718 | orchestrator | Tuesday 07 April 2026 00:52:00 +0000 (0:00:01.851) 0:00:05.430 ********* 2026-04-07 00:52:55.642723 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-07 00:52:55.642727 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-07 00:52:55.642732 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-07 00:52:55.642736 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-07 00:52:55.642741 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-07 00:52:55.642746 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 
2026-04-07 00:52:55.642750 | orchestrator | 2026-04-07 00:52:55.642755 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-07 00:52:55.642760 | orchestrator | Tuesday 07 April 2026 00:52:02 +0000 (0:00:01.603) 0:00:07.033 ********* 2026-04-07 00:52:55.642764 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-07 00:52:55.642769 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:52:55.642774 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-07 00:52:55.642779 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:52:55.642783 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-07 00:52:55.642788 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:52:55.642792 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-07 00:52:55.642797 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:52:55.642801 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-07 00:52:55.642806 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:52:55.642810 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-07 00:52:55.642815 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:52:55.642819 | orchestrator | 2026-04-07 00:52:55.642824 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-07 00:52:55.642828 | orchestrator | Tuesday 07 April 2026 00:52:03 +0000 (0:00:00.955) 0:00:07.988 ********* 2026-04-07 00:52:55.642839 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:52:55.642851 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:52:55.642856 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:52:55.642861 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:52:55.642871 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:52:55.642876 | orchestrator | skipping: [testbed-node-5] 2026-04-07 
00:52:55.642881 | orchestrator | 2026-04-07 00:52:55.642886 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-07 00:52:55.642892 | orchestrator | Tuesday 07 April 2026 00:52:04 +0000 (0:00:00.629) 0:00:08.618 ********* 2026-04-07 00:52:55.642913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.642926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.642932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.642938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.642944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2026-04-07 00:52:55.642954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.642963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.642972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.642978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.642984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.642990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643006 | orchestrator | 2026-04-07 00:52:55.643011 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-07 00:52:55.643016 | orchestrator | Tuesday 07 April 2026 00:52:05 +0000 (0:00:01.335) 0:00:09.953 ********* 2026-04-07 00:52:55.643024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643071 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643104 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643109 | orchestrator | 2026-04-07 00:52:55.643113 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-07 00:52:55.643118 | orchestrator | Tuesday 07 April 2026 00:52:08 +0000 (0:00:02.699) 0:00:12.653 ********* 2026-04-07 00:52:55.643123 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:52:55.643128 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:52:55.643135 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:52:55.643140 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:52:55.643144 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:52:55.643149 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:52:55.643153 | orchestrator | 2026-04-07 00:52:55.643158 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-07 00:52:55.643163 | orchestrator | Tuesday 07 April 2026 00:52:08 +0000 (0:00:00.638) 0:00:13.291 ********* 2026-04-07 00:52:55.643167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-04-07 00:52:55.643234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 00:52:55.643256 | orchestrator | 2026-04-07 00:52:55.643260 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 00:52:55.643265 | orchestrator | Tuesday 07 April 2026 00:52:10 +0000 (0:00:02.071) 0:00:15.363 ********* 2026-04-07 00:52:55.643270 | orchestrator | 2026-04-07 00:52:55.643274 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 00:52:55.643279 | orchestrator | Tuesday 07 April 2026 00:52:10 +0000 (0:00:00.138) 0:00:15.502 ********* 2026-04-07 00:52:55.643284 | orchestrator | 2026-04-07 00:52:55.643288 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 00:52:55.643296 | orchestrator | Tuesday 07 April 2026 00:52:11 +0000 (0:00:00.120) 0:00:15.622 ********* 2026-04-07 00:52:55.643301 | orchestrator | 2026-04-07 00:52:55.643306 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 00:52:55.643310 | orchestrator | Tuesday 07 April 2026 00:52:11 +0000 (0:00:00.117) 0:00:15.740 ********* 2026-04-07 00:52:55.643315 | orchestrator | 2026-04-07 00:52:55.643320 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 00:52:55.643324 | orchestrator | Tuesday 07 April 2026 00:52:11 +0000 (0:00:00.201) 0:00:15.941 ********* 2026-04-07 00:52:55.643329 | orchestrator | 2026-04-07 00:52:55.643333 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 00:52:55.643338 | orchestrator | Tuesday 07 April 2026 00:52:11 +0000 (0:00:00.117) 0:00:16.058 ********* 2026-04-07 00:52:55.643342 | orchestrator | 2026-04-07 00:52:55.643347 | orchestrator | RUNNING 
HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-07 00:52:55.643351 | orchestrator | Tuesday 07 April 2026 00:52:11 +0000 (0:00:00.119) 0:00:16.177 ********* 2026-04-07 00:52:55.643356 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:52:55.643360 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:52:55.643365 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:52:55.643370 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:52:55.643374 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:52:55.643379 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:52:55.643383 | orchestrator | 2026-04-07 00:52:55.643388 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-07 00:52:55.643392 | orchestrator | Tuesday 07 April 2026 00:52:20 +0000 (0:00:09.101) 0:00:25.279 ********* 2026-04-07 00:52:55.643397 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:52:55.643402 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:52:55.643406 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:52:55.643411 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:52:55.643415 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:52:55.643420 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:52:55.643424 | orchestrator | 2026-04-07 00:52:55.643429 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-07 00:52:55.643433 | orchestrator | Tuesday 07 April 2026 00:52:22 +0000 (0:00:01.584) 0:00:26.863 ********* 2026-04-07 00:52:55.643438 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:52:55.643443 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:52:55.643447 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:52:55.643452 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:52:55.643456 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:52:55.643463 | orchestrator | changed: [testbed-node-2] 
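Each service item echoed in the tasks above carries a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As a hedged sketch of what such a dict corresponds to in plain Docker terms, the helper below maps it onto `docker run` health flags; the helper name and the seconds-unit handling are assumptions for illustration, not kolla-ansible code:

```python
def healthcheck_to_docker_args(hc):
    """Translate a kolla-style healthcheck dict into docker CLI health flags.

    The values in the log are bare second counts as strings (e.g. '30'),
    so the 's' unit that the docker CLI expects is appended here.
    """
    test = hc["test"]
    # ['CMD-SHELL', 'ovsdb-client list-dbs'] -> shell-form test command
    assert test[0] == "CMD-SHELL"
    return [
        f"--health-cmd={test[1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Healthcheck block copied from the openvswitch-db-server items above
db_hc = {"interval": "30", "retries": "3", "start_period": "5",
         "test": ["CMD-SHELL", "ovsdb-client list-dbs"], "timeout": "30"}
print(healthcheck_to_docker_args(db_hc)[0])  # --health-cmd=ovsdb-client list-dbs
```

This also explains the "Waiting for openvswitch_db service to be ready" handler above: the container is only considered ready once `ovsdb-client list-dbs` succeeds inside it.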
2026-04-07 00:52:55.643471 | orchestrator | 2026-04-07 00:52:55.643479 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-07 00:52:55.643490 | orchestrator | Tuesday 07 April 2026 00:52:31 +0000 (0:00:09.644) 0:00:36.508 ********* 2026-04-07 00:52:55.643498 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-07 00:52:55.643505 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-07 00:52:55.643512 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-07 00:52:55.643519 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-07 00:52:55.643526 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-07 00:52:55.643539 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-07 00:52:55.643546 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-07 00:52:55.643558 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-07 00:52:55.643566 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-07 00:52:55.643573 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-07 00:52:55.643599 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-07 00:52:55.643607 | orchestrator | 
changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-07 00:52:55.643613 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 00:52:55.643620 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 00:52:55.643627 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 00:52:55.643634 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 00:52:55.643641 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 00:52:55.643646 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 00:52:55.643650 | orchestrator | 2026-04-07 00:52:55.643655 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-07 00:52:55.643660 | orchestrator | Tuesday 07 April 2026 00:52:40 +0000 (0:00:08.565) 0:00:45.074 ********* 2026-04-07 00:52:55.643664 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-07 00:52:55.643669 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:52:55.643673 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-07 00:52:55.643678 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:52:55.643683 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-07 00:52:55.643687 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:52:55.643692 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-07 00:52:55.643697 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-07 00:52:55.643701 
| orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-04-07 00:52:55.643706 | orchestrator |
2026-04-07 00:52:55.643711 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-07 00:52:55.643715 | orchestrator | Tuesday 07 April 2026 00:52:43 +0000 (0:00:02.731) 0:00:47.805 *********
2026-04-07 00:52:55.643720 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-07 00:52:55.643724 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:52:55.643729 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-07 00:52:55.643733 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:52:55.643738 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-07 00:52:55.643744 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:52:55.643751 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-07 00:52:55.643762 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-07 00:52:55.643772 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-07 00:52:55.643779 | orchestrator |
2026-04-07 00:52:55.643786 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-07 00:52:55.643793 | orchestrator | Tuesday 07 April 2026 00:52:46 +0000 (0:00:03.386) 0:00:51.192 *********
2026-04-07 00:52:55.643800 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:52:55.643808 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:52:55.643815 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:52:55.643827 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:52:55.643834 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:52:55.643841 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:52:55.643849 | orchestrator |
2026-04-07 00:52:55.643856 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:52:55.643864 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-07 00:52:55.643872 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-07 00:52:55.643879 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-07 00:52:55.643887 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-07 00:52:55.643895 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-07 00:52:55.643905 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-07 00:52:55.643910 | orchestrator |
2026-04-07 00:52:55.643915 | orchestrator |
2026-04-07 00:52:55.643919 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:52:55.643924 | orchestrator | Tuesday 07 April 2026 00:52:54 +0000 (0:00:07.570) 0:00:58.763 *********
2026-04-07 00:52:55.643929 | orchestrator | ===============================================================================
2026-04-07 00:52:55.643934 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.22s
2026-04-07 00:52:55.643938 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.10s
2026-04-07 00:52:55.643943 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.57s
2026-04-07 00:52:55.643952 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.39s
2026-04-07 00:52:55.643957 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.73s
2026-04-07 00:52:55.643961 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.70s
2026-04-07 00:52:55.643966 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.07s
2026-04-07 00:52:55.643971 | orchestrator | module-load : Load modules ---------------------------------------------- 1.85s
2026-04-07 00:52:55.643975 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.60s
2026-04-07 00:52:55.643980 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.58s
2026-04-07 00:52:55.643985 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.43s
2026-04-07 00:52:55.643989 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.34s
2026-04-07 00:52:55.643994 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.15s
2026-04-07 00:52:55.643998 | orchestrator | module-load : Drop module persistence ----------------------------------- 0.96s
2026-04-07 00:52:55.644003 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.81s
2026-04-07 00:52:55.644007 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s
2026-04-07 00:52:55.644012 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.64s
2026-04-07 00:52:55.644017 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.63s
2026-04-07 00:52:58.668426 | orchestrator | 2026-04-07 00:52:58 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:52:58.672737 | orchestrator | 2026-04-07 00:52:58 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:52:58.674518 | orchestrator | 2026-04-07 00:52:58 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07
00:52:58.676987 | orchestrator | 2026-04-07 00:52:58 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:52:58.679006 | orchestrator | 2026-04-07 00:52:58 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:52:58.679056 | orchestrator | 2026-04-07 00:52:58 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:01.725372 | orchestrator | 2026-04-07 00:53:01 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:01.726452 | orchestrator | 2026-04-07 00:53:01 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:01.727590 | orchestrator | 2026-04-07 00:53:01 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:01.728472 | orchestrator | 2026-04-07 00:53:01 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:01.729561 | orchestrator | 2026-04-07 00:53:01 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:01.729712 | orchestrator | 2026-04-07 00:53:01 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:04.760585 | orchestrator | 2026-04-07 00:53:04 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:04.761786 | orchestrator | 2026-04-07 00:53:04 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:04.762771 | orchestrator | 2026-04-07 00:53:04 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:04.764779 | orchestrator | 2026-04-07 00:53:04 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:04.765781 | orchestrator | 2026-04-07 00:53:04 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:04.765823 | orchestrator | 2026-04-07 00:53:04 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:07.797838 | orchestrator 
| 2026-04-07 00:53:07 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:07.798551 | orchestrator | 2026-04-07 00:53:07 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:07.802578 | orchestrator | 2026-04-07 00:53:07 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:07.804359 | orchestrator | 2026-04-07 00:53:07 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:07.805283 | orchestrator | 2026-04-07 00:53:07 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:07.805317 | orchestrator | 2026-04-07 00:53:07 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:10.847333 | orchestrator | 2026-04-07 00:53:10 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:10.848273 | orchestrator | 2026-04-07 00:53:10 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:10.849435 | orchestrator | 2026-04-07 00:53:10 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:10.850667 | orchestrator | 2026-04-07 00:53:10 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:10.851884 | orchestrator | 2026-04-07 00:53:10 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:10.851915 | orchestrator | 2026-04-07 00:53:10 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:13.887485 | orchestrator | 2026-04-07 00:53:13 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:13.887573 | orchestrator | 2026-04-07 00:53:13 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:13.887581 | orchestrator | 2026-04-07 00:53:13 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:13.889120 | orchestrator | 
2026-04-07 00:53:13 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:13.889720 | orchestrator | 2026-04-07 00:53:13 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:13.889744 | orchestrator | 2026-04-07 00:53:13 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:16.924995 | orchestrator | 2026-04-07 00:53:16 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:16.925439 | orchestrator | 2026-04-07 00:53:16 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:16.926460 | orchestrator | 2026-04-07 00:53:16 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:16.927444 | orchestrator | 2026-04-07 00:53:16 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:16.930626 | orchestrator | 2026-04-07 00:53:16 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:16.930716 | orchestrator | 2026-04-07 00:53:16 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:19.966182 | orchestrator | 2026-04-07 00:53:19 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:19.967004 | orchestrator | 2026-04-07 00:53:19 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:19.968035 | orchestrator | 2026-04-07 00:53:19 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:19.969100 | orchestrator | 2026-04-07 00:53:19 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:19.970144 | orchestrator | 2026-04-07 00:53:19 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:19.970172 | orchestrator | 2026-04-07 00:53:19 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:23.013122 | orchestrator | 2026-04-07 00:53:23 | INFO  | 
Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:23.013185 | orchestrator | 2026-04-07 00:53:23 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:23.014504 | orchestrator | 2026-04-07 00:53:23 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:23.016475 | orchestrator | 2026-04-07 00:53:23 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:23.017949 | orchestrator | 2026-04-07 00:53:23 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:23.018253 | orchestrator | 2026-04-07 00:53:23 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:26.065983 | orchestrator | 2026-04-07 00:53:26 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:26.066823 | orchestrator | 2026-04-07 00:53:26 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:26.068065 | orchestrator | 2026-04-07 00:53:26 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:26.069292 | orchestrator | 2026-04-07 00:53:26 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:26.070821 | orchestrator | 2026-04-07 00:53:26 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:26.070858 | orchestrator | 2026-04-07 00:53:26 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:29.131260 | orchestrator | 2026-04-07 00:53:29 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:29.135352 | orchestrator | 2026-04-07 00:53:29 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:29.137129 | orchestrator | 2026-04-07 00:53:29 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:29.137863 | orchestrator | 2026-04-07 00:53:29 | INFO  | Task 
acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:29.138627 | orchestrator | 2026-04-07 00:53:29 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:29.138660 | orchestrator | 2026-04-07 00:53:29 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:32.174736 | orchestrator | 2026-04-07 00:53:32 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:32.176650 | orchestrator | 2026-04-07 00:53:32 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:32.178827 | orchestrator | 2026-04-07 00:53:32 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:32.181172 | orchestrator | 2026-04-07 00:53:32 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:32.183236 | orchestrator | 2026-04-07 00:53:32 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:32.183288 | orchestrator | 2026-04-07 00:53:32 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:35.221932 | orchestrator | 2026-04-07 00:53:35 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:35.222061 | orchestrator | 2026-04-07 00:53:35 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:35.222149 | orchestrator | 2026-04-07 00:53:35 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:35.222966 | orchestrator | 2026-04-07 00:53:35 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:35.223366 | orchestrator | 2026-04-07 00:53:35 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:35.223381 | orchestrator | 2026-04-07 00:53:35 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:38.254314 | orchestrator | 2026-04-07 00:53:38 | INFO  | Task 
fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:38.254385 | orchestrator | 2026-04-07 00:53:38 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:38.254553 | orchestrator | 2026-04-07 00:53:38 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:38.255008 | orchestrator | 2026-04-07 00:53:38 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:38.255755 | orchestrator | 2026-04-07 00:53:38 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:38.255779 | orchestrator | 2026-04-07 00:53:38 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:41.348866 | orchestrator | 2026-04-07 00:53:41 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:41.348957 | orchestrator | 2026-04-07 00:53:41 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:41.348966 | orchestrator | 2026-04-07 00:53:41 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:41.349004 | orchestrator | 2026-04-07 00:53:41 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:41.349011 | orchestrator | 2026-04-07 00:53:41 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:41.349017 | orchestrator | 2026-04-07 00:53:41 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:44.373002 | orchestrator | 2026-04-07 00:53:44 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:44.373520 | orchestrator | 2026-04-07 00:53:44 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:44.374075 | orchestrator | 2026-04-07 00:53:44 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:44.374690 | orchestrator | 2026-04-07 00:53:44 | INFO  | Task 
acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:44.375458 | orchestrator | 2026-04-07 00:53:44 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:44.375484 | orchestrator | 2026-04-07 00:53:44 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:47.657735 | orchestrator | 2026-04-07 00:53:47 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:47.657804 | orchestrator | 2026-04-07 00:53:47 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:47.657810 | orchestrator | 2026-04-07 00:53:47 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:47.657814 | orchestrator | 2026-04-07 00:53:47 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:47.657818 | orchestrator | 2026-04-07 00:53:47 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:47.657822 | orchestrator | 2026-04-07 00:53:47 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:50.630370 | orchestrator | 2026-04-07 00:53:50 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:50.630589 | orchestrator | 2026-04-07 00:53:50 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:50.631356 | orchestrator | 2026-04-07 00:53:50 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:50.632013 | orchestrator | 2026-04-07 00:53:50 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state STARTED 2026-04-07 00:53:50.634549 | orchestrator | 2026-04-07 00:53:50 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:53:50.634659 | orchestrator | 2026-04-07 00:53:50 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:53:53.681306 | orchestrator | 2026-04-07 00:53:53 | INFO  | Task 
fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:53:53.683405 | orchestrator | 2026-04-07 00:53:53 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED 2026-04-07 00:53:53.685604 | orchestrator | 2026-04-07 00:53:53 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:53:53.687406 | orchestrator | 2026-04-07 00:53:53 | INFO  | Task acc15307-5964-409d-a638-409b10a50271 is in state SUCCESS 2026-04-07 00:53:53.690629 | orchestrator | 2026-04-07 00:53:53.690672 | orchestrator | 2026-04-07 00:53:53.690679 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-07 00:53:53.690684 | orchestrator | 2026-04-07 00:53:53.690689 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-07 00:53:53.690694 | orchestrator | Tuesday 07 April 2026 00:49:29 +0000 (0:00:00.291) 0:00:00.291 ********* 2026-04-07 00:53:53.690707 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:53:53.690713 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:53:53.690717 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:53:53.690796 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.690801 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.690806 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.690810 | orchestrator | 2026-04-07 00:53:53.690814 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-07 00:53:53.690819 | orchestrator | Tuesday 07 April 2026 00:49:30 +0000 (0:00:00.646) 0:00:00.937 ********* 2026-04-07 00:53:53.690824 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.690829 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.690833 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.690837 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.690841 | orchestrator | skipping: [testbed-node-1] 
2026-04-07 00:53:53.690846 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.690850 | orchestrator | 2026-04-07 00:53:53.690854 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-07 00:53:53.690859 | orchestrator | Tuesday 07 April 2026 00:49:31 +0000 (0:00:00.683) 0:00:01.621 ********* 2026-04-07 00:53:53.690863 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.690867 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.690871 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.690876 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.690880 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.690884 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.690888 | orchestrator | 2026-04-07 00:53:53.690893 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-07 00:53:53.690897 | orchestrator | Tuesday 07 April 2026 00:49:31 +0000 (0:00:00.552) 0:00:02.173 ********* 2026-04-07 00:53:53.690902 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.690906 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.690910 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.690914 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:53:53.690919 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:53:53.690923 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:53:53.690927 | orchestrator | 2026-04-07 00:53:53.690931 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-07 00:53:53.690936 | orchestrator | Tuesday 07 April 2026 00:49:34 +0000 (0:00:02.927) 0:00:05.101 ********* 2026-04-07 00:53:53.690940 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:53:53.690944 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:53:53.690948 | orchestrator | changed: [testbed-node-0] 2026-04-07 
00:53:53.690953 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.690957 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.690961 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:53:53.690966 | orchestrator | 2026-04-07 00:53:53.690970 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-07 00:53:53.690974 | orchestrator | Tuesday 07 April 2026 00:49:36 +0000 (0:00:01.870) 0:00:06.972 ********* 2026-04-07 00:53:53.690978 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:53:53.691003 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:53:53.691009 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:53:53.691013 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.691018 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.691022 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.691026 | orchestrator | 2026-04-07 00:53:53.691030 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-07 00:53:53.691035 | orchestrator | Tuesday 07 April 2026 00:49:39 +0000 (0:00:02.550) 0:00:09.522 ********* 2026-04-07 00:53:53.691039 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.691043 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691047 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691056 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691060 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691064 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691068 | orchestrator | 2026-04-07 00:53:53.691073 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-07 00:53:53.691077 | orchestrator | Tuesday 07 April 2026 00:49:40 +0000 (0:00:01.153) 0:00:10.675 ********* 2026-04-07 00:53:53.691081 | orchestrator | skipping: [testbed-node-3] 2026-04-07 
00:53:53.691086 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691090 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691094 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691098 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691102 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691107 | orchestrator | 2026-04-07 00:53:53.691111 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-07 00:53:53.691115 | orchestrator | Tuesday 07 April 2026 00:49:40 +0000 (0:00:00.492) 0:00:11.167 ********* 2026-04-07 00:53:53.691120 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 00:53:53.691124 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 00:53:53.691128 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.691133 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 00:53:53.691137 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 00:53:53.691141 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691145 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 00:53:53.691150 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 00:53:53.691154 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691158 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 00:53:53.691171 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 00:53:53.691175 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691180 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 00:53:53.691184 | 
orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 00:53:53.691188 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691192 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 00:53:53.691197 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 00:53:53.691201 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691205 | orchestrator | 2026-04-07 00:53:53.691209 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-07 00:53:53.691214 | orchestrator | Tuesday 07 April 2026 00:49:41 +0000 (0:00:00.940) 0:00:12.108 ********* 2026-04-07 00:53:53.691218 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.691222 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691227 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691231 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691235 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691240 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691245 | orchestrator | 2026-04-07 00:53:53.691253 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-07 00:53:53.691261 | orchestrator | Tuesday 07 April 2026 00:49:43 +0000 (0:00:01.839) 0:00:13.948 ********* 2026-04-07 00:53:53.691273 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:53:53.691281 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:53:53.691288 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:53:53.691295 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.691302 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.691313 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.691320 | orchestrator | 2026-04-07 00:53:53.691326 | orchestrator | TASK [k3s_download : 
Download k3s binary x64] ********************************** 2026-04-07 00:53:53.691333 | orchestrator | Tuesday 07 April 2026 00:49:44 +0000 (0:00:00.928) 0:00:14.876 ********* 2026-04-07 00:53:53.691340 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.691347 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.691353 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:53:53.691359 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.691366 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:53:53.691372 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:53:53.691378 | orchestrator | 2026-04-07 00:53:53.691385 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-07 00:53:53.691392 | orchestrator | Tuesday 07 April 2026 00:49:51 +0000 (0:00:07.215) 0:00:22.092 ********* 2026-04-07 00:53:53.691398 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.691404 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691410 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691417 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691424 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691431 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691435 | orchestrator | 2026-04-07 00:53:53.691439 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-07 00:53:53.691443 | orchestrator | Tuesday 07 April 2026 00:49:54 +0000 (0:00:02.543) 0:00:24.635 ********* 2026-04-07 00:53:53.691450 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.691454 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691458 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691462 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691466 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691470 | orchestrator | 
skipping: [testbed-node-2] 2026-04-07 00:53:53.691474 | orchestrator | 2026-04-07 00:53:53.691478 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-07 00:53:53.691482 | orchestrator | Tuesday 07 April 2026 00:49:56 +0000 (0:00:01.929) 0:00:26.565 ********* 2026-04-07 00:53:53.691486 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.691490 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691494 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691498 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691502 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691506 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691510 | orchestrator | 2026-04-07 00:53:53.691514 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-07 00:53:53.691517 | orchestrator | Tuesday 07 April 2026 00:49:57 +0000 (0:00:01.167) 0:00:27.733 ********* 2026-04-07 00:53:53.691521 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-07 00:53:53.691526 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-07 00:53:53.691530 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.691533 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-07 00:53:53.691537 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-07 00:53:53.691541 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691545 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-07 00:53:53.691549 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-07 00:53:53.691553 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691557 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-07 00:53:53.691561 | orchestrator | skipping: 
[testbed-node-0] => (item=rancher/k3s)  2026-04-07 00:53:53.691564 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691568 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-07 00:53:53.691572 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-07 00:53:53.691579 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691583 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-07 00:53:53.691587 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-04-07 00:53:53.691591 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691595 | orchestrator | 2026-04-07 00:53:53.691599 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-04-07 00:53:53.691607 | orchestrator | Tuesday 07 April 2026 00:49:57 +0000 (0:00:00.598) 0:00:28.332 ********* 2026-04-07 00:53:53.691611 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.691615 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691619 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691623 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691627 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691631 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691634 | orchestrator | 2026-04-07 00:53:53.691638 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-04-07 00:53:53.691642 | orchestrator | Tuesday 07 April 2026 00:49:58 +0000 (0:00:00.863) 0:00:29.195 ********* 2026-04-07 00:53:53.691646 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.691650 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.691654 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.691658 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691662 | orchestrator | skipping: 
[testbed-node-1] 2026-04-07 00:53:53.691666 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691670 | orchestrator | 2026-04-07 00:53:53.691674 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-04-07 00:53:53.691678 | orchestrator | 2026-04-07 00:53:53.691682 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-04-07 00:53:53.691685 | orchestrator | Tuesday 07 April 2026 00:50:00 +0000 (0:00:01.385) 0:00:30.581 ********* 2026-04-07 00:53:53.691689 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.691693 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.691697 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.691701 | orchestrator | 2026-04-07 00:53:53.691705 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-04-07 00:53:53.691709 | orchestrator | Tuesday 07 April 2026 00:50:01 +0000 (0:00:00.839) 0:00:31.421 ********* 2026-04-07 00:53:53.691713 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.691717 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.691735 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.691740 | orchestrator | 2026-04-07 00:53:53.691744 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-04-07 00:53:53.691748 | orchestrator | Tuesday 07 April 2026 00:50:02 +0000 (0:00:01.504) 0:00:32.925 ********* 2026-04-07 00:53:53.691752 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.691756 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.691759 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.691763 | orchestrator | 2026-04-07 00:53:53.691767 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-04-07 00:53:53.691771 | orchestrator | Tuesday 07 April 2026 00:50:03 +0000 (0:00:01.039) 0:00:33.965 ********* 
2026-04-07 00:53:53.691775 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.691779 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.691783 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.691787 | orchestrator | 2026-04-07 00:53:53.691791 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-07 00:53:53.691794 | orchestrator | Tuesday 07 April 2026 00:50:05 +0000 (0:00:01.482) 0:00:35.447 ********* 2026-04-07 00:53:53.691798 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.691802 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691806 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691810 | orchestrator | 2026-04-07 00:53:53.691814 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-07 00:53:53.691823 | orchestrator | Tuesday 07 April 2026 00:50:05 +0000 (0:00:00.392) 0:00:35.840 ********* 2026-04-07 00:53:53.691827 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.691831 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.691834 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.691838 | orchestrator | 2026-04-07 00:53:53.691842 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-07 00:53:53.691846 | orchestrator | Tuesday 07 April 2026 00:50:06 +0000 (0:00:01.238) 0:00:37.079 ********* 2026-04-07 00:53:53.691850 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.691854 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.691858 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.691862 | orchestrator | 2026-04-07 00:53:53.691866 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-07 00:53:53.691870 | orchestrator | Tuesday 07 April 2026 00:50:08 +0000 (0:00:01.929) 0:00:39.008 ********* 2026-04-07 00:53:53.691874 
| orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:53:53.691878 | orchestrator | 2026-04-07 00:53:53.691882 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-07 00:53:53.691885 | orchestrator | Tuesday 07 April 2026 00:50:09 +0000 (0:00:01.308) 0:00:40.316 ********* 2026-04-07 00:53:53.691889 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.691893 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.691897 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.691901 | orchestrator | 2026-04-07 00:53:53.691905 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-07 00:53:53.691909 | orchestrator | Tuesday 07 April 2026 00:50:12 +0000 (0:00:02.732) 0:00:43.049 ********* 2026-04-07 00:53:53.691916 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691922 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691928 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.691934 | orchestrator | 2026-04-07 00:53:53.691939 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-07 00:53:53.691945 | orchestrator | Tuesday 07 April 2026 00:50:13 +0000 (0:00:00.889) 0:00:43.939 ********* 2026-04-07 00:53:53.691951 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.691958 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691965 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.691972 | orchestrator | 2026-04-07 00:53:53.691978 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-07 00:53:53.691986 | orchestrator | Tuesday 07 April 2026 00:50:15 +0000 (0:00:01.550) 0:00:45.489 ********* 2026-04-07 00:53:53.691990 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.691993 | orchestrator | 
skipping: [testbed-node-2] 2026-04-07 00:53:53.691997 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692001 | orchestrator | 2026-04-07 00:53:53.692005 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-07 00:53:53.692012 | orchestrator | Tuesday 07 April 2026 00:50:16 +0000 (0:00:01.502) 0:00:46.992 ********* 2026-04-07 00:53:53.692016 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.692020 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.692024 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.692028 | orchestrator | 2026-04-07 00:53:53.692032 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-07 00:53:53.692036 | orchestrator | Tuesday 07 April 2026 00:50:17 +0000 (0:00:00.668) 0:00:47.660 ********* 2026-04-07 00:53:53.692040 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.692044 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.692048 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.692052 | orchestrator | 2026-04-07 00:53:53.692056 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-07 00:53:53.692059 | orchestrator | Tuesday 07 April 2026 00:50:18 +0000 (0:00:00.783) 0:00:48.444 ********* 2026-04-07 00:53:53.692067 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692071 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.692074 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.692078 | orchestrator | 2026-04-07 00:53:53.692082 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-07 00:53:53.692086 | orchestrator | Tuesday 07 April 2026 00:50:20 +0000 (0:00:02.664) 0:00:51.109 ********* 2026-04-07 00:53:53.692092 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.692098 | orchestrator | ok: 
[testbed-node-1] 2026-04-07 00:53:53.692108 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.692115 | orchestrator | 2026-04-07 00:53:53.692121 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-07 00:53:53.692127 | orchestrator | Tuesday 07 April 2026 00:50:23 +0000 (0:00:02.274) 0:00:53.384 ********* 2026-04-07 00:53:53.692134 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.692140 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.692147 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.692153 | orchestrator | 2026-04-07 00:53:53.692159 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-07 00:53:53.692164 | orchestrator | Tuesday 07 April 2026 00:50:23 +0000 (0:00:00.505) 0:00:53.890 ********* 2026-04-07 00:53:53.692168 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-07 00:53:53.692172 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-07 00:53:53.692176 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-07 00:53:53.692180 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-07 00:53:53.692184 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-07 00:53:53.692191 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-04-07 00:53:53.692195 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-07 00:53:53.692199 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-07 00:53:53.692203 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-07 00:53:53.692207 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-07 00:53:53.692213 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-07 00:53:53.692219 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-07 00:53:53.692225 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-07 00:53:53.692231 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-07 00:53:53.692238 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-04-07 00:53:53.692244 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.692250 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.692261 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.692267 | orchestrator | 2026-04-07 00:53:53.692274 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-07 00:53:53.692281 | orchestrator | Tuesday 07 April 2026 00:51:17 +0000 (0:00:54.066) 0:01:47.956 ********* 2026-04-07 00:53:53.692288 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.692295 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.692301 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.692305 | orchestrator | 2026-04-07 00:53:53.692309 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-07 00:53:53.692316 | orchestrator | Tuesday 07 April 2026 00:51:18 +0000 (0:00:00.413) 0:01:48.369 ********* 2026-04-07 00:53:53.692320 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692324 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.692328 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.692332 | orchestrator | 2026-04-07 00:53:53.692336 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-07 00:53:53.692340 | orchestrator | Tuesday 07 April 2026 00:51:19 +0000 (0:00:01.329) 0:01:49.699 ********* 2026-04-07 00:53:53.692344 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692348 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.692352 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.692356 | orchestrator | 2026-04-07 00:53:53.692360 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-07 00:53:53.692363 | orchestrator | Tuesday 07 April 2026 00:51:20 +0000 (0:00:01.408) 0:01:51.108 ********* 2026-04-07 00:53:53.692367 
| orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692371 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.692375 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.692379 | orchestrator | 2026-04-07 00:53:53.692383 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-07 00:53:53.692387 | orchestrator | Tuesday 07 April 2026 00:51:46 +0000 (0:00:25.570) 0:02:16.679 ********* 2026-04-07 00:53:53.692391 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.692395 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.692398 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.692402 | orchestrator | 2026-04-07 00:53:53.692406 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-07 00:53:53.692410 | orchestrator | Tuesday 07 April 2026 00:51:46 +0000 (0:00:00.665) 0:02:17.345 ********* 2026-04-07 00:53:53.692414 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.692418 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.692422 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.692426 | orchestrator | 2026-04-07 00:53:53.692430 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-07 00:53:53.692434 | orchestrator | Tuesday 07 April 2026 00:51:47 +0000 (0:00:00.768) 0:02:18.113 ********* 2026-04-07 00:53:53.692438 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692441 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.692445 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.692449 | orchestrator | 2026-04-07 00:53:53.692453 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-07 00:53:53.692457 | orchestrator | Tuesday 07 April 2026 00:51:48 +0000 (0:00:00.526) 0:02:18.639 ********* 2026-04-07 00:53:53.692461 | orchestrator | ok: [testbed-node-2] 
2026-04-07 00:53:53.692465 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.692469 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.692473 | orchestrator | 2026-04-07 00:53:53.692477 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-07 00:53:53.692481 | orchestrator | Tuesday 07 April 2026 00:51:48 +0000 (0:00:00.563) 0:02:19.203 ********* 2026-04-07 00:53:53.692484 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.692488 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.692492 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.692496 | orchestrator | 2026-04-07 00:53:53.692532 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-07 00:53:53.692542 | orchestrator | Tuesday 07 April 2026 00:51:49 +0000 (0:00:00.269) 0:02:19.473 ********* 2026-04-07 00:53:53.692549 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692560 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.692566 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.692572 | orchestrator | 2026-04-07 00:53:53.692578 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-07 00:53:53.692585 | orchestrator | Tuesday 07 April 2026 00:51:49 +0000 (0:00:00.659) 0:02:20.133 ********* 2026-04-07 00:53:53.692591 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692598 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.692605 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.692612 | orchestrator | 2026-04-07 00:53:53.692619 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-07 00:53:53.692626 | orchestrator | Tuesday 07 April 2026 00:51:50 +0000 (0:00:00.638) 0:02:20.771 ********* 2026-04-07 00:53:53.692630 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692634 | 
orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.692638 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.692642 | orchestrator | 2026-04-07 00:53:53.692646 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-07 00:53:53.692650 | orchestrator | Tuesday 07 April 2026 00:51:51 +0000 (0:00:00.841) 0:02:21.613 ********* 2026-04-07 00:53:53.692653 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:53:53.692657 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:53:53.692661 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:53:53.692665 | orchestrator | 2026-04-07 00:53:53.692669 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-07 00:53:53.692673 | orchestrator | Tuesday 07 April 2026 00:51:52 +0000 (0:00:00.882) 0:02:22.495 ********* 2026-04-07 00:53:53.692677 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.692681 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.692685 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.692688 | orchestrator | 2026-04-07 00:53:53.692692 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-07 00:53:53.692696 | orchestrator | Tuesday 07 April 2026 00:51:52 +0000 (0:00:00.363) 0:02:22.859 ********* 2026-04-07 00:53:53.692700 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:53:53.692704 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:53:53.692708 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:53:53.692712 | orchestrator | 2026-04-07 00:53:53.692716 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-07 00:53:53.692720 | orchestrator | Tuesday 07 April 2026 00:51:52 +0000 (0:00:00.260) 0:02:23.119 ********* 2026-04-07 00:53:53.692739 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.692744 | orchestrator | 
ok: [testbed-node-1] 2026-04-07 00:53:53.692748 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.692752 | orchestrator | 2026-04-07 00:53:53.692756 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-07 00:53:53.692760 | orchestrator | Tuesday 07 April 2026 00:51:53 +0000 (0:00:00.673) 0:02:23.793 ********* 2026-04-07 00:53:53.692764 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:53:53.692772 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:53:53.692776 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:53:53.692780 | orchestrator | 2026-04-07 00:53:53.692784 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-07 00:53:53.692788 | orchestrator | Tuesday 07 April 2026 00:51:54 +0000 (0:00:00.717) 0:02:24.510 ********* 2026-04-07 00:53:53.692792 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-07 00:53:53.692796 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-07 00:53:53.692800 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-07 00:53:53.692811 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-07 00:53:53.692820 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-07 00:53:53.692829 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-07 00:53:53.692835 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-07 00:53:53.692841 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-07 
00:53:53.692847 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-07 00:53:53.692853 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-07 00:53:53.692859 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-07 00:53:53.692865 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-07 00:53:53.692871 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-07 00:53:53.692877 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-07 00:53:53.692883 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-07 00:53:53.692890 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-07 00:53:53.692896 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-07 00:53:53.692903 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-07 00:53:53.692910 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-07 00:53:53.692922 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-07 00:53:53.692927 | orchestrator | 2026-04-07 00:53:53.692931 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-07 00:53:53.692935 | orchestrator | 2026-04-07 00:53:53.692939 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-07 00:53:53.692943 | orchestrator | Tuesday 07 April 2026 00:51:57 +0000 (0:00:02.898) 
0:02:27.409 ********* 2026-04-07 00:53:53.692947 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:53:53.692951 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:53:53.692955 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:53:53.692959 | orchestrator | 2026-04-07 00:53:53.692962 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-07 00:53:53.692966 | orchestrator | Tuesday 07 April 2026 00:51:57 +0000 (0:00:00.437) 0:02:27.846 ********* 2026-04-07 00:53:53.692970 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:53:53.692974 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:53:53.692978 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:53:53.692982 | orchestrator | 2026-04-07 00:53:53.692985 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-07 00:53:53.692989 | orchestrator | Tuesday 07 April 2026 00:51:58 +0000 (0:00:00.574) 0:02:28.421 ********* 2026-04-07 00:53:53.692993 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:53:53.692997 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:53:53.693001 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:53:53.693005 | orchestrator | 2026-04-07 00:53:53.693009 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-07 00:53:53.693013 | orchestrator | Tuesday 07 April 2026 00:51:58 +0000 (0:00:00.386) 0:02:28.808 ********* 2026-04-07 00:53:53.693017 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:53:53.693024 | orchestrator | 2026-04-07 00:53:53.693028 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-07 00:53:53.693032 | orchestrator | Tuesday 07 April 2026 00:51:58 +0000 (0:00:00.406) 0:02:29.214 ********* 2026-04-07 00:53:53.693036 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.693040 
| orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.693044 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.693048 | orchestrator | 2026-04-07 00:53:53.693052 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-07 00:53:53.693055 | orchestrator | Tuesday 07 April 2026 00:51:59 +0000 (0:00:00.253) 0:02:29.467 ********* 2026-04-07 00:53:53.693059 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.693063 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.693067 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.693071 | orchestrator | 2026-04-07 00:53:53.693076 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-07 00:53:53.693083 | orchestrator | Tuesday 07 April 2026 00:51:59 +0000 (0:00:00.383) 0:02:29.851 ********* 2026-04-07 00:53:53.693090 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:53:53.693101 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:53:53.693107 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:53:53.693114 | orchestrator | 2026-04-07 00:53:53.693121 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-07 00:53:53.693127 | orchestrator | Tuesday 07 April 2026 00:51:59 +0000 (0:00:00.289) 0:02:30.140 ********* 2026-04-07 00:53:53.693134 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:53:53.693141 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:53:53.693146 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:53:53.693150 | orchestrator | 2026-04-07 00:53:53.693154 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-07 00:53:53.693158 | orchestrator | Tuesday 07 April 2026 00:52:00 +0000 (0:00:00.673) 0:02:30.814 ********* 2026-04-07 00:53:53.693163 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:53:53.693167 | 
orchestrator | changed: [testbed-node-4] 2026-04-07 00:53:53.693171 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:53:53.693174 | orchestrator | 2026-04-07 00:53:53.693178 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-07 00:53:53.693182 | orchestrator | Tuesday 07 April 2026 00:52:01 +0000 (0:00:01.020) 0:02:31.834 ********* 2026-04-07 00:53:53.693186 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:53:53.693190 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:53:53.693194 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:53:53.693198 | orchestrator | 2026-04-07 00:53:53.693202 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-07 00:53:53.693208 | orchestrator | Tuesday 07 April 2026 00:52:02 +0000 (0:00:01.453) 0:02:33.288 ********* 2026-04-07 00:53:53.693215 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:53:53.693223 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:53:53.693232 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:53:53.693238 | orchestrator | 2026-04-07 00:53:53.693244 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-07 00:53:53.693250 | orchestrator | 2026-04-07 00:53:53.693256 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-07 00:53:53.693264 | orchestrator | Tuesday 07 April 2026 00:52:13 +0000 (0:00:10.127) 0:02:43.415 ********* 2026-04-07 00:53:53.693270 | orchestrator | ok: [testbed-manager] 2026-04-07 00:53:53.693278 | orchestrator | 2026-04-07 00:53:53.693286 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-07 00:53:53.693292 | orchestrator | Tuesday 07 April 2026 00:52:13 +0000 (0:00:00.788) 0:02:44.204 ********* 2026-04-07 00:53:53.693299 | orchestrator | changed: [testbed-manager] 2026-04-07 
00:53:53.693305 | orchestrator | 2026-04-07 00:53:53.693312 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-07 00:53:53.693324 | orchestrator | Tuesday 07 April 2026 00:52:14 +0000 (0:00:00.482) 0:02:44.686 ********* 2026-04-07 00:53:53.693331 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-07 00:53:53.693337 | orchestrator | 2026-04-07 00:53:53.693344 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-07 00:53:53.693352 | orchestrator | Tuesday 07 April 2026 00:52:14 +0000 (0:00:00.624) 0:02:45.310 ********* 2026-04-07 00:53:53.693359 | orchestrator | changed: [testbed-manager] 2026-04-07 00:53:53.693365 | orchestrator | 2026-04-07 00:53:53.693374 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-07 00:53:53.693378 | orchestrator | Tuesday 07 April 2026 00:52:15 +0000 (0:00:01.035) 0:02:46.345 ********* 2026-04-07 00:53:53.693382 | orchestrator | changed: [testbed-manager] 2026-04-07 00:53:53.693386 | orchestrator | 2026-04-07 00:53:53.693390 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-07 00:53:53.693394 | orchestrator | Tuesday 07 April 2026 00:52:17 +0000 (0:00:01.128) 0:02:47.474 ********* 2026-04-07 00:53:53.693398 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-07 00:53:53.693402 | orchestrator | 2026-04-07 00:53:53.693406 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-07 00:53:53.693410 | orchestrator | Tuesday 07 April 2026 00:52:18 +0000 (0:00:01.637) 0:02:49.111 ********* 2026-04-07 00:53:53.693414 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-07 00:53:53.693418 | orchestrator | 2026-04-07 00:53:53.693422 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-04-07 00:53:53.693425 | orchestrator | Tuesday 07 April 2026 00:52:19 +0000 (0:00:01.043) 0:02:50.154 *********
2026-04-07 00:53:53.693429 | orchestrator | changed: [testbed-manager]
2026-04-07 00:53:53.693433 | orchestrator |
2026-04-07 00:53:53.693437 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-07 00:53:53.693441 | orchestrator | Tuesday 07 April 2026 00:52:20 +0000 (0:00:00.487) 0:02:50.642 *********
2026-04-07 00:53:53.693445 | orchestrator | changed: [testbed-manager]
2026-04-07 00:53:53.693449 | orchestrator |
2026-04-07 00:53:53.693453 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-07 00:53:53.693457 | orchestrator |
2026-04-07 00:53:53.693461 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-07 00:53:53.693465 | orchestrator | Tuesday 07 April 2026 00:52:20 +0000 (0:00:00.631) 0:02:51.273 *********
2026-04-07 00:53:53.693468 | orchestrator | ok: [testbed-manager]
2026-04-07 00:53:53.693472 | orchestrator |
2026-04-07 00:53:53.693476 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-07 00:53:53.693480 | orchestrator | Tuesday 07 April 2026 00:52:21 +0000 (0:00:00.152) 0:02:51.426 *********
2026-04-07 00:53:53.693484 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-07 00:53:53.693488 | orchestrator |
2026-04-07 00:53:53.693492 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-07 00:53:53.693496 | orchestrator | Tuesday 07 April 2026 00:52:21 +0000 (0:00:00.259) 0:02:51.685 *********
2026-04-07 00:53:53.693500 | orchestrator | ok: [testbed-manager]
2026-04-07 00:53:53.693504 | orchestrator |
2026-04-07 00:53:53.693508 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-07 00:53:53.693512 | orchestrator | Tuesday 07 April 2026 00:52:22 +0000 (0:00:01.539) 0:02:53.225 *********
2026-04-07 00:53:53.693520 | orchestrator | ok: [testbed-manager]
2026-04-07 00:53:53.693524 | orchestrator |
2026-04-07 00:53:53.693528 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-07 00:53:53.693532 | orchestrator | Tuesday 07 April 2026 00:52:24 +0000 (0:00:01.738) 0:02:54.964 *********
2026-04-07 00:53:53.693536 | orchestrator | changed: [testbed-manager]
2026-04-07 00:53:53.693540 | orchestrator |
2026-04-07 00:53:53.693544 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-07 00:53:53.693548 | orchestrator | Tuesday 07 April 2026 00:52:25 +0000 (0:00:00.716) 0:02:55.680 *********
2026-04-07 00:53:53.693554 | orchestrator | ok: [testbed-manager]
2026-04-07 00:53:53.693558 | orchestrator |
2026-04-07 00:53:53.693562 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-07 00:53:53.693566 | orchestrator | Tuesday 07 April 2026 00:52:25 +0000 (0:00:00.406) 0:02:56.087 *********
2026-04-07 00:53:53.693570 | orchestrator | changed: [testbed-manager]
2026-04-07 00:53:53.693574 | orchestrator |
2026-04-07 00:53:53.693578 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-07 00:53:53.693582 | orchestrator | Tuesday 07 April 2026 00:52:32 +0000 (0:00:06.624) 0:03:02.711 *********
2026-04-07 00:53:53.693586 | orchestrator | changed: [testbed-manager]
2026-04-07 00:53:53.693590 | orchestrator |
2026-04-07 00:53:53.693594 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-07 00:53:53.693598 | orchestrator | Tuesday 07 April 2026 00:52:43 +0000 (0:00:11.618) 0:03:14.330 *********
2026-04-07 00:53:53.693602 | orchestrator | ok: [testbed-manager]
2026-04-07 00:53:53.693606 | orchestrator |
2026-04-07 00:53:53.693610 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-07 00:53:53.693614 | orchestrator |
2026-04-07 00:53:53.693618 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-07 00:53:53.693622 | orchestrator | Tuesday 07 April 2026 00:52:44 +0000 (0:00:00.578) 0:03:14.908 *********
2026-04-07 00:53:53.693626 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:53:53.693630 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:53:53.693634 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:53:53.693638 | orchestrator |
2026-04-07 00:53:53.693642 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-07 00:53:53.693646 | orchestrator | Tuesday 07 April 2026 00:52:44 +0000 (0:00:00.393) 0:03:15.301 *********
2026-04-07 00:53:53.693650 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.693654 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:53:53.693657 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:53:53.693661 | orchestrator |
2026-04-07 00:53:53.693665 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-07 00:53:53.693669 | orchestrator | Tuesday 07 April 2026 00:52:45 +0000 (0:00:00.358) 0:03:15.659 *********
2026-04-07 00:53:53.693673 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:53:53.693677 | orchestrator |
2026-04-07 00:53:53.693681 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-07 00:53:53.693685 | orchestrator | Tuesday 07 April 2026 00:52:45 +0000 (0:00:00.487) 0:03:16.147 *********
2026-04-07 00:53:53.693689 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 00:53:53.693693 | orchestrator |
2026-04-07 00:53:53.693699 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-07 00:53:53.693703 | orchestrator | Tuesday 07 April 2026 00:52:46 +0000 (0:00:00.777) 0:03:16.925 *********
2026-04-07 00:53:53.693707 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 00:53:53.693711 | orchestrator |
2026-04-07 00:53:53.693715 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-07 00:53:53.693719 | orchestrator | Tuesday 07 April 2026 00:52:47 +0000 (0:00:00.847) 0:03:17.773 *********
2026-04-07 00:53:53.693739 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.693743 | orchestrator |
2026-04-07 00:53:53.693747 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-07 00:53:53.693752 | orchestrator | Tuesday 07 April 2026 00:52:47 +0000 (0:00:00.146) 0:03:17.919 *********
2026-04-07 00:53:53.693756 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 00:53:53.693759 | orchestrator |
2026-04-07 00:53:53.693763 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-07 00:53:53.693767 | orchestrator | Tuesday 07 April 2026 00:52:48 +0000 (0:00:01.268) 0:03:19.188 *********
2026-04-07 00:53:53.693771 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.693778 | orchestrator |
2026-04-07 00:53:53.693782 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-07 00:53:53.693786 | orchestrator | Tuesday 07 April 2026 00:52:48 +0000 (0:00:00.095) 0:03:19.283 *********
2026-04-07 00:53:53.693790 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.693794 | orchestrator |
2026-04-07 00:53:53.693798 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-07 00:53:53.693802 | orchestrator | Tuesday 07 April 2026 00:52:49 +0000 (0:00:00.087) 0:03:19.370 *********
2026-04-07 00:53:53.693806 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.693809 | orchestrator |
2026-04-07 00:53:53.693813 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-07 00:53:53.693817 | orchestrator | Tuesday 07 April 2026 00:52:49 +0000 (0:00:00.089) 0:03:19.459 *********
2026-04-07 00:53:53.693821 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.693825 | orchestrator |
2026-04-07 00:53:53.693829 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-07 00:53:53.693833 | orchestrator | Tuesday 07 April 2026 00:52:49 +0000 (0:00:00.087) 0:03:19.547 *********
2026-04-07 00:53:53.693837 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 00:53:53.693841 | orchestrator |
2026-04-07 00:53:53.693845 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-07 00:53:53.693849 | orchestrator | Tuesday 07 April 2026 00:52:54 +0000 (0:00:05.091) 0:03:24.638 *********
2026-04-07 00:53:53.693853 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-07 00:53:53.693857 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-07 00:53:53.693864 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-07 00:53:53.693868 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-07 00:53:53.693872 | orchestrator |
2026-04-07 00:53:53.693876 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-07 00:53:53.693880 | orchestrator | Tuesday 07 April 2026 00:53:24 +0000 (0:00:30.634) 0:03:55.273 *********
2026-04-07 00:53:53.693884 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 00:53:53.693887 | orchestrator |
2026-04-07 00:53:53.693891 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-07 00:53:53.693898 | orchestrator | Tuesday 07 April 2026 00:53:26 +0000 (0:00:01.166) 0:03:56.440 *********
2026-04-07 00:53:53.693905 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 00:53:53.693912 | orchestrator |
2026-04-07 00:53:53.693918 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-07 00:53:53.693925 | orchestrator | Tuesday 07 April 2026 00:53:27 +0000 (0:00:01.883) 0:03:58.323 *********
2026-04-07 00:53:53.693932 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 00:53:53.693939 | orchestrator |
2026-04-07 00:53:53.693946 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-07 00:53:53.693951 | orchestrator | Tuesday 07 April 2026 00:53:29 +0000 (0:00:01.039) 0:03:59.362 *********
2026-04-07 00:53:53.693955 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.693959 | orchestrator |
2026-04-07 00:53:53.693963 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-07 00:53:53.693967 | orchestrator | Tuesday 07 April 2026 00:53:29 +0000 (0:00:00.098) 0:03:59.461 *********
2026-04-07 00:53:53.693971 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-07 00:53:53.693975 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-07 00:53:53.693979 | orchestrator |
2026-04-07 00:53:53.693983 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-07 00:53:53.693987 | orchestrator | Tuesday 07 April 2026 00:53:30 +0000 (0:00:01.830) 0:04:01.291 *********
2026-04-07 00:53:53.693991 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.693998 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:53:53.694002 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:53:53.694006 | orchestrator |
2026-04-07 00:53:53.694010 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-07 00:53:53.694051 | orchestrator | Tuesday 07 April 2026 00:53:31 +0000 (0:00:00.425) 0:04:01.717 *********
2026-04-07 00:53:53.694055 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:53:53.694060 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:53:53.694063 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:53:53.694067 | orchestrator |
2026-04-07 00:53:53.694071 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-07 00:53:53.694075 | orchestrator |
2026-04-07 00:53:53.694079 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-07 00:53:53.694085 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.743) 0:04:02.461 *********
2026-04-07 00:53:53.694092 | orchestrator | ok: [testbed-manager]
2026-04-07 00:53:53.694102 | orchestrator |
2026-04-07 00:53:53.694112 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-07 00:53:53.694118 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.130) 0:04:02.591 *********
2026-04-07 00:53:53.694125 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-07 00:53:53.694132 | orchestrator |
2026-04-07 00:53:53.694138 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-07 00:53:53.694144 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.312) 0:04:02.904 *********
2026-04-07 00:53:53.694150 | orchestrator | changed: [testbed-manager]
2026-04-07 00:53:53.694156 | orchestrator |
2026-04-07 00:53:53.694162 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-07 00:53:53.694168 | orchestrator |
2026-04-07 00:53:53.694174 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-07 00:53:53.694180 | orchestrator | Tuesday 07 April 2026 00:53:37 +0000 (0:00:04.853) 0:04:07.758 *********
2026-04-07 00:53:53.694186 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:53:53.694192 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:53:53.694199 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:53:53.694205 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:53:53.694212 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:53:53.694219 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:53:53.694225 | orchestrator |
2026-04-07 00:53:53.694232 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-07 00:53:53.694238 | orchestrator | Tuesday 07 April 2026 00:53:38 +0000 (0:00:00.645) 0:04:08.404 *********
2026-04-07 00:53:53.694244 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-07 00:53:53.694251 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-07 00:53:53.694258 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-07 00:53:53.694265 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-07 00:53:53.694272 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-07 00:53:53.694278 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-07 00:53:53.694284 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-07 00:53:53.694291 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-07 00:53:53.694297 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-07 00:53:53.694308 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-07 00:53:53.694315 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-07 00:53:53.694321 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-07 00:53:53.694333 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-07 00:53:53.694339 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-07 00:53:53.694346 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-07 00:53:53.694352 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-07 00:53:53.694359 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-07 00:53:53.694364 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-07 00:53:53.694371 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-07 00:53:53.694377 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-07 00:53:53.694384 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-07 00:53:53.694391 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-07 00:53:53.694398 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-07 00:53:53.694404 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-07 00:53:53.694411 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-07 00:53:53.694418 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-07 00:53:53.694425 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-07 00:53:53.694432 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-07 00:53:53.694436 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-07 00:53:53.694440 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-07 00:53:53.694444 | orchestrator |
2026-04-07 00:53:53.694450 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-07 00:53:53.694459 | orchestrator | Tuesday 07 April 2026 00:53:50 +0000 (0:00:12.409) 0:04:20.814 *********
2026-04-07 00:53:53.694468 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:53:53.694475 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:53:53.694480 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:53:53.694487 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.694496 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:53:53.694503 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:53:53.694509 | orchestrator |
2026-04-07 00:53:53.694516 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-07 00:53:53.694522 | orchestrator | Tuesday 07 April 2026 00:53:50 +0000 (0:00:00.496) 0:04:21.310 *********
2026-04-07 00:53:53.694528 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:53:53.694534 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:53:53.694540 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:53:53.694546 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:53:53.694552 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:53:53.694558 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:53:53.694565 | orchestrator |
2026-04-07 00:53:53.694571 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:53:53.694577 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:53:53.694585 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-07 00:53:53.694591 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-07 00:53:53.694602 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-07 00:53:53.694608 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-07 00:53:53.694614 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-07 00:53:53.694621 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-07 00:53:53.694627 | orchestrator |
2026-04-07 00:53:53.694633 | orchestrator |
2026-04-07 00:53:53.694639 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:53:53.694645 | orchestrator | Tuesday 07 April 2026 00:53:51 +0000 (0:00:00.545) 0:04:21.855 *********
2026-04-07 00:53:53.694657 | orchestrator | ===============================================================================
2026-04-07 00:53:53.694663 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.07s
2026-04-07 00:53:53.694670 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 30.63s
2026-04-07 00:53:53.694676 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.57s
2026-04-07 00:53:53.694682 | orchestrator | Manage labels ---------------------------------------------------------- 12.41s
2026-04-07 00:53:53.694689 | orchestrator | kubectl : Install required packages ------------------------------------ 11.62s
2026-04-07 00:53:53.694695 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.13s
2026-04-07 00:53:53.694701 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.22s
2026-04-07 00:53:53.694707 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.62s
2026-04-07 00:53:53.694713 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.09s
2026-04-07 00:53:53.694719 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.85s
2026-04-07 00:53:53.694739 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.93s
2026-04-07 00:53:53.694745 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.90s
2026-04-07 00:53:53.694751 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.73s
2026-04-07 00:53:53.694757 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.66s
2026-04-07 00:53:53.694764 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.55s
2026-04-07 00:53:53.694770 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.54s
2026-04-07 00:53:53.694776 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.28s
2026-04-07 00:53:53.694782 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.93s
2026-04-07 00:53:53.694788 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.93s
2026-04-07 00:53:53.694794 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.88s
2026-04-07 00:53:53.694801 | orchestrator | 2026-04-07 00:53:53 | INFO  | Task 9f096cb6-8a81-4641-83a8-bd00da042596 is in state STARTED
2026-04-07 00:53:53.696158 | orchestrator | 2026-04-07 00:53:53 | INFO  | Task 693d1092-b1c2-4257-96be-1a22a89c0566 is in state STARTED
2026-04-07 00:53:53.696851 | orchestrator | 2026-04-07 00:53:53 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:53:53.696937 | orchestrator | 2026-04-07 00:53:53 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:53:56.738287 | orchestrator | 2026-04-07 00:53:56 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:53:56.740602 | orchestrator | 2026-04-07 00:53:56 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:53:56.740638 | orchestrator | 2026-04-07 00:53:56 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:53:56.742050 | orchestrator | 2026-04-07 00:53:56 | INFO  | Task 9f096cb6-8a81-4641-83a8-bd00da042596 is in state STARTED
2026-04-07 00:53:56.742924 | orchestrator | 2026-04-07 00:53:56 | INFO  | Task 693d1092-b1c2-4257-96be-1a22a89c0566 is in state STARTED
2026-04-07 00:53:56.744148 | orchestrator | 2026-04-07 00:53:56 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:53:56.744496 | orchestrator | 2026-04-07 00:53:56 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:53:59.918559 | orchestrator | 2026-04-07 00:53:59 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:53:59.918611 | orchestrator | 2026-04-07 00:53:59 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:53:59.918621 | orchestrator | 2026-04-07 00:53:59 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:53:59.918628 | orchestrator | 2026-04-07 00:53:59 | INFO  | Task 9f096cb6-8a81-4641-83a8-bd00da042596 is in state SUCCESS
2026-04-07 00:53:59.918634 | orchestrator | 2026-04-07 00:53:59 | INFO  | Task 693d1092-b1c2-4257-96be-1a22a89c0566 is in state STARTED
2026-04-07 00:53:59.918641 | orchestrator | 2026-04-07 00:53:59 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:53:59.918658 | orchestrator | 2026-04-07 00:53:59 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:54:02.930010 | orchestrator | 2026-04-07 00:54:02 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:54:02.930514 | orchestrator | 2026-04-07 00:54:02 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:54:02.931356 | orchestrator | 2026-04-07 00:54:02 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:54:02.931840 | orchestrator | 2026-04-07 00:54:02 | INFO  | Task 693d1092-b1c2-4257-96be-1a22a89c0566 is in state STARTED
2026-04-07 00:54:02.932542 | orchestrator | 2026-04-07 00:54:02 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:54:02.932601 | orchestrator | 2026-04-07 00:54:02 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:54:05.959156 | orchestrator | 2026-04-07 00:54:05 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:54:05.959221 | orchestrator | 2026-04-07 00:54:05 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:54:05.960574 | orchestrator | 2026-04-07 00:54:05 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:54:05.961057 | orchestrator | 2026-04-07 00:54:05 | INFO  | Task 693d1092-b1c2-4257-96be-1a22a89c0566 is in state SUCCESS
2026-04-07 00:54:05.961610 | orchestrator | 2026-04-07 00:54:05 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:54:05.961642 | orchestrator | 2026-04-07 00:54:05 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:54:08.984188 | orchestrator | 2026-04-07 00:54:08 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:54:08.984333 | orchestrator | 2026-04-07 00:54:08 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:54:08.984944 | orchestrator | 2026-04-07 00:54:08 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:54:08.985727 | orchestrator | 2026-04-07 00:54:08 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:54:08.985797 | orchestrator | 2026-04-07 00:54:08 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:54:12.019928 | orchestrator | 2026-04-07 00:54:12 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:54:12.022401 | orchestrator | 2026-04-07 00:54:12 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:54:12.023838 | orchestrator | 2026-04-07 00:54:12 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:54:12.025207 | orchestrator | 2026-04-07 00:54:12 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:54:12.025265 | orchestrator | 2026-04-07 00:54:12 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:54:15.076989 | orchestrator | 2026-04-07 00:54:15 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:54:15.077628 | orchestrator | 2026-04-07 00:54:15 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:54:15.078745 | orchestrator | 2026-04-07 00:54:15 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:54:15.079892 | orchestrator | 2026-04-07 00:54:15 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:54:15.079928 | orchestrator | 2026-04-07 00:54:15 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:54:18.110615 | orchestrator | 2026-04-07 00:54:18 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:54:18.111057 | orchestrator | 2026-04-07 00:54:18 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:54:18.111750 | orchestrator | 2026-04-07 00:54:18 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:54:18.112439 | orchestrator | 2026-04-07 00:54:18 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:54:18.112462 | orchestrator | 2026-04-07 00:54:18 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:54:21.158999 | orchestrator | 2026-04-07 00:54:21 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:54:21.160816 | orchestrator | 2026-04-07 00:54:21 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:54:21.163999 | orchestrator | 2026-04-07 00:54:21 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:54:21.164728 | orchestrator | 2026-04-07 00:54:21 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:54:21.164825 | orchestrator | 2026-04-07 00:54:21 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:54:24.198237 | orchestrator | 2026-04-07 00:54:24 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:54:24.200886 | orchestrator | 2026-04-07 00:54:24 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state STARTED
2026-04-07 00:54:24.202711 | orchestrator | 2026-04-07 00:54:24 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED
2026-04-07 00:54:24.205092 | orchestrator | 2026-04-07 00:54:24 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:54:24.205261 | orchestrator | 2026-04-07 00:54:24 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:54:27.239462 | orchestrator | 2026-04-07 00:54:27 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:54:27.240521 | orchestrator |
2026-04-07 00:54:27.240563 | orchestrator |
2026-04-07 00:54:27.240572 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-07 00:54:27.240580 | orchestrator |
2026-04-07 00:54:27.240588 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-07 00:54:27.240596 | orchestrator | Tuesday 07 April 2026 00:53:55 +0000 (0:00:00.273) 0:00:00.273 *********
2026-04-07 00:54:27.240604 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-07 00:54:27.240612 | orchestrator |
2026-04-07 00:54:27.240619 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-07 00:54:27.240627 | orchestrator | Tuesday 07 April 2026 00:53:56 +0000 (0:00:01.179) 0:00:01.453 *********
2026-04-07 00:54:27.240668 | orchestrator | changed: [testbed-manager]
2026-04-07 00:54:27.240678 | orchestrator |
2026-04-07 00:54:27.240686 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-07 00:54:27.240693 | orchestrator | Tuesday 07 April 2026 00:53:58 +0000 (0:00:01.845) 0:00:03.298 *********
2026-04-07 00:54:27.240701 | orchestrator | changed: [testbed-manager]
2026-04-07 00:54:27.240708 | orchestrator |
2026-04-07 00:54:27.240715 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:54:27.240723 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:54:27.240733 | orchestrator |
2026-04-07 00:54:27.240740 | orchestrator |
2026-04-07 00:54:27.240748 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:54:27.240755 | orchestrator | Tuesday 07 April 2026 00:53:59 +0000 (0:00:00.600) 0:00:03.899 *********
2026-04-07 00:54:27.240762 | orchestrator | ===============================================================================
2026-04-07 00:54:27.240769 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.85s
2026-04-07 00:54:27.240777 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.18s
2026-04-07 00:54:27.240784 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.60s
2026-04-07 00:54:27.240791 | orchestrator |
2026-04-07 00:54:27.240853 | orchestrator |
2026-04-07 00:54:27.240861 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-07 00:54:27.240891 | orchestrator |
2026-04-07 00:54:27.240898 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-07 00:54:27.240906 | orchestrator | Tuesday 07 April 2026 00:53:55 +0000 (0:00:00.279) 0:00:00.279 *********
2026-04-07 00:54:27.240913 | orchestrator | ok: [testbed-manager]
2026-04-07 00:54:27.240921 | orchestrator |
2026-04-07 00:54:27.240929 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-07 00:54:27.240936 | orchestrator | Tuesday 07 April 2026 00:53:56 +0000 (0:00:00.940) 0:00:01.219 *********
2026-04-07 00:54:27.240943 | orchestrator | ok: [testbed-manager]
2026-04-07 00:54:27.240950 | orchestrator |
2026-04-07 00:54:27.240958 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-07 00:54:27.240965 | orchestrator | Tuesday 07 April 2026 00:53:57 +0000 (0:00:00.700) 0:00:01.920 *********
2026-04-07 00:54:27.240972 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-07 00:54:27.240979 | orchestrator |
2026-04-07 00:54:27.240987 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-07 00:54:27.241004 | orchestrator | Tuesday 07 April 2026 00:53:58 +0000 (0:00:01.139) 0:00:03.060 *********
2026-04-07 00:54:27.241012 | orchestrator | changed: [testbed-manager]
2026-04-07 00:54:27.241019 | orchestrator |
2026-04-07 00:54:27.241037 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-07 00:54:27.241045 | orchestrator | Tuesday 07 April 2026 00:53:59 +0000 (0:00:01.456) 0:00:04.516 *********
2026-04-07 00:54:27.241052 | orchestrator | changed: [testbed-manager]
2026-04-07 00:54:27.241059 | orchestrator |
2026-04-07 00:54:27.241066 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-07 00:54:27.241098 | orchestrator | Tuesday 07 April 2026 00:54:00 +0000 (0:00:00.789) 0:00:05.305 *********
2026-04-07 00:54:27.241106 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-07 00:54:27.241124 | orchestrator |
2026-04-07 00:54:27.241132 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-07 00:54:27.241139 | orchestrator | Tuesday 07 April 2026 00:54:02 +0000 (0:00:02.452) 0:00:07.758 *********
2026-04-07 00:54:27.241146 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-07 00:54:27.241153 | orchestrator |
2026-04-07 00:54:27.241172 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-07 00:54:27.241180 | orchestrator | Tuesday 07 April 2026 00:54:03 +0000 (0:00:00.869) 0:00:08.627 *********
2026-04-07 00:54:27.241187 | orchestrator | ok: [testbed-manager]
2026-04-07 00:54:27.241194 | orchestrator |
2026-04-07 00:54:27.241224 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-07 00:54:27.241232 | orchestrator | Tuesday 07 April 2026 00:54:04 +0000 (0:00:00.318) 0:00:08.946 *********
2026-04-07 00:54:27.241239 | orchestrator | ok: [testbed-manager]
2026-04-07 00:54:27.241260 | orchestrator |
2026-04-07 00:54:27.241276 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:54:27.241284 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:54:27.241291 | orchestrator |
2026-04-07 00:54:27.241299 | orchestrator |
2026-04-07 00:54:27.241306 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:54:27.241313 | orchestrator | Tuesday 07 April 2026 00:54:04 +0000 (0:00:00.357) 0:00:09.303 *********
2026-04-07 00:54:27.241320 | orchestrator | ===============================================================================
2026-04-07 00:54:27.241327 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.45s
2026-04-07 00:54:27.241334 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.46s
2026-04-07 00:54:27.241342 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.14s
2026-04-07 00:54:27.241361 | orchestrator | Get home directory of operator user ------------------------------------- 0.94s
2026-04-07 00:54:27.241369 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.87s
2026-04-07 00:54:27.241376 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.79s
2026-04-07 00:54:27.241383 | orchestrator | Create .kube directory -------------------------------------------------- 0.70s
2026-04-07
00:54:27.241390 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.36s 2026-04-07 00:54:27.241398 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.32s 2026-04-07 00:54:27.241405 | orchestrator | 2026-04-07 00:54:27.241412 | orchestrator | 2026-04-07 00:54:27.241419 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-07 00:54:27.241426 | orchestrator | 2026-04-07 00:54:27.241434 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-07 00:54:27.241441 | orchestrator | Tuesday 07 April 2026 00:52:08 +0000 (0:00:00.086) 0:00:00.086 ********* 2026-04-07 00:54:27.241448 | orchestrator | ok: [localhost] => { 2026-04-07 00:54:27.241456 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-07 00:54:27.241464 | orchestrator | } 2026-04-07 00:54:27.241471 | orchestrator | 2026-04-07 00:54:27.241479 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-07 00:54:27.241486 | orchestrator | Tuesday 07 April 2026 00:52:08 +0000 (0:00:00.033) 0:00:00.120 ********* 2026-04-07 00:54:27.241495 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-07 00:54:27.241503 | orchestrator | ...ignoring 2026-04-07 00:54:27.241517 | orchestrator | 2026-04-07 00:54:27.241524 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-07 00:54:27.241531 | orchestrator | Tuesday 07 April 2026 00:52:11 +0000 (0:00:03.121) 0:00:03.241 ********* 2026-04-07 00:54:27.241539 | orchestrator | skipping: [localhost] 2026-04-07 00:54:27.241546 | orchestrator | 2026-04-07 00:54:27.241553 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-07 00:54:27.241560 | orchestrator | Tuesday 07 April 2026 00:52:11 +0000 (0:00:00.066) 0:00:03.308 ********* 2026-04-07 00:54:27.241567 | orchestrator | ok: [localhost] 2026-04-07 00:54:27.241574 | orchestrator | 2026-04-07 00:54:27.241581 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 00:54:27.241589 | orchestrator | 2026-04-07 00:54:27.241596 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 00:54:27.241603 | orchestrator | Tuesday 07 April 2026 00:52:12 +0000 (0:00:00.327) 0:00:03.636 ********* 2026-04-07 00:54:27.241610 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:54:27.241617 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:54:27.241624 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:54:27.241632 | orchestrator | 2026-04-07 00:54:27.241639 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 00:54:27.241646 | orchestrator | Tuesday 07 April 2026 00:52:12 +0000 (0:00:00.642) 0:00:04.278 ********* 2026-04-07 00:54:27.241653 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-07 00:54:27.241661 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-04-07 00:54:27.241668 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-07 00:54:27.241675 | orchestrator | 2026-04-07 00:54:27.241682 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-07 00:54:27.241689 | orchestrator | 2026-04-07 00:54:27.241696 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-07 00:54:27.241703 | orchestrator | Tuesday 07 April 2026 00:52:13 +0000 (0:00:00.839) 0:00:05.118 ********* 2026-04-07 00:54:27.241711 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:54:27.241718 | orchestrator | 2026-04-07 00:54:27.241726 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-07 00:54:27.241733 | orchestrator | Tuesday 07 April 2026 00:52:14 +0000 (0:00:00.960) 0:00:06.078 ********* 2026-04-07 00:54:27.241740 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:54:27.241747 | orchestrator | 2026-04-07 00:54:27.241754 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-07 00:54:27.241762 | orchestrator | Tuesday 07 April 2026 00:52:16 +0000 (0:00:02.405) 0:00:08.483 ********* 2026-04-07 00:54:27.241769 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:54:27.241776 | orchestrator | 2026-04-07 00:54:27.241783 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-07 00:54:27.241818 | orchestrator | Tuesday 07 April 2026 00:52:17 +0000 (0:00:00.940) 0:00:09.424 ********* 2026-04-07 00:54:27.241827 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:54:27.241834 | orchestrator | 2026-04-07 00:54:27.241841 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-07 00:54:27.241848 | 
orchestrator | Tuesday 07 April 2026 00:52:18 +0000 (0:00:00.363) 0:00:09.787 ********* 2026-04-07 00:54:27.241855 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:54:27.241863 | orchestrator | 2026-04-07 00:54:27.241870 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-07 00:54:27.241877 | orchestrator | Tuesday 07 April 2026 00:52:18 +0000 (0:00:00.503) 0:00:10.291 ********* 2026-04-07 00:54:27.241884 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:54:27.241891 | orchestrator | 2026-04-07 00:54:27.241898 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-07 00:54:27.241906 | orchestrator | Tuesday 07 April 2026 00:52:19 +0000 (0:00:00.413) 0:00:10.704 ********* 2026-04-07 00:54:27.241913 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:54:27.241926 | orchestrator | 2026-04-07 00:54:27.241944 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-07 00:54:27.241964 | orchestrator | Tuesday 07 April 2026 00:52:20 +0000 (0:00:01.145) 0:00:11.849 ********* 2026-04-07 00:54:27.241972 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:54:27.241979 | orchestrator | 2026-04-07 00:54:27.241987 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-07 00:54:27.241994 | orchestrator | Tuesday 07 April 2026 00:52:21 +0000 (0:00:00.908) 0:00:12.758 ********* 2026-04-07 00:54:27.242001 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:54:27.242009 | orchestrator | 2026-04-07 00:54:27.242057 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-07 00:54:27.242065 | orchestrator | Tuesday 07 April 2026 00:52:22 +0000 (0:00:01.030) 0:00:13.788 ********* 2026-04-07 00:54:27.242072 | orchestrator | 
skipping: [testbed-node-0] 2026-04-07 00:54:27.242080 | orchestrator | 2026-04-07 00:54:27.242087 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-07 00:54:27.242094 | orchestrator | Tuesday 07 April 2026 00:52:22 +0000 (0:00:00.559) 0:00:14.348 ********* 2026-04-07 00:54:27.242106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:54:27.242118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:54:27.242131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:54:27.242145 | orchestrator | 2026-04-07 00:54:27.242156 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-07 00:54:27.242168 | orchestrator | Tuesday 07 April 2026 00:52:25 +0000 (0:00:03.048) 0:00:17.397 ********* 2026-04-07 00:54:27.242191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:54:27.242210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:54:27.242226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:54:27.242238 | orchestrator | 2026-04-07 00:54:27.242249 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-07 00:54:27.242275 | orchestrator | Tuesday 07 April 2026 00:52:27 +0000 (0:00:01.744) 0:00:19.141 ********* 2026-04-07 00:54:27.242287 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-07 00:54:27.242298 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-07 00:54:27.242309 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-07 00:54:27.242320 | orchestrator | 2026-04-07 00:54:27.242331 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-04-07 00:54:27.242343 | orchestrator | Tuesday 07 April 2026 00:52:29 +0000 (0:00:01.943) 0:00:21.085 ********* 2026-04-07 00:54:27.242355 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-07 00:54:27.242367 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-07 00:54:27.242379 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-07 00:54:27.242392 | orchestrator | 2026-04-07 00:54:27.242405 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-07 00:54:27.242424 | orchestrator | Tuesday 07 April 2026 00:52:32 +0000 (0:00:02.973) 0:00:24.059 ********* 2026-04-07 00:54:27.242437 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-07 00:54:27.242445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-07 00:54:27.242452 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-07 00:54:27.242459 | orchestrator | 2026-04-07 00:54:27.242466 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-07 00:54:27.242474 | orchestrator | Tuesday 07 April 2026 00:52:34 +0000 (0:00:02.187) 0:00:26.246 ********* 2026-04-07 00:54:27.242481 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-07 00:54:27.242488 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-07 00:54:27.242495 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-07 00:54:27.242502 | orchestrator | 2026-04-07 00:54:27.242509 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-04-07 00:54:27.242516 | orchestrator | Tuesday 07 April 2026 00:52:36 +0000 (0:00:01.671) 0:00:27.917 ********* 2026-04-07 00:54:27.242524 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-07 00:54:27.242531 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-07 00:54:27.242538 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-07 00:54:27.242545 | orchestrator | 2026-04-07 00:54:27.242552 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-07 00:54:27.242559 | orchestrator | Tuesday 07 April 2026 00:52:37 +0000 (0:00:01.604) 0:00:29.522 ********* 2026-04-07 00:54:27.242566 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-07 00:54:27.242574 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-07 00:54:27.242581 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-07 00:54:27.242588 | orchestrator | 2026-04-07 00:54:27.242595 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-07 00:54:27.242602 | orchestrator | Tuesday 07 April 2026 00:52:39 +0000 (0:00:01.642) 0:00:31.164 ********* 2026-04-07 00:54:27.242610 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:54:27.242617 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:54:27.242630 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:54:27.242638 | orchestrator | 2026-04-07 00:54:27.242645 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-07 00:54:27.242652 | orchestrator | Tuesday 07 April 2026 00:52:39 
+0000 (0:00:00.380) 0:00:31.544 ********* 2026-04-07 00:54:27.242664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:54:27.242678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:54:27.242687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:54:27.242695 | orchestrator | 2026-04-07 00:54:27.242702 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-07 00:54:27.242709 | orchestrator | Tuesday 07 April 2026 00:52:41 +0000 (0:00:01.333) 0:00:32.878 ********* 2026-04-07 00:54:27.242716 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:54:27.242724 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:54:27.242731 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:54:27.242738 | orchestrator | 2026-04-07 00:54:27.242745 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-07 00:54:27.242758 | 
orchestrator | Tuesday 07 April 2026 00:52:42 +0000 (0:00:00.801) 0:00:33.679 ********* 2026-04-07 00:54:27.242765 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:54:27.242772 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:54:27.242779 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:54:27.242786 | orchestrator | 2026-04-07 00:54:27.242816 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-07 00:54:27.242824 | orchestrator | Tuesday 07 April 2026 00:52:49 +0000 (0:00:07.065) 0:00:40.745 ********* 2026-04-07 00:54:27.242831 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:54:27.242839 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:54:27.242846 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:54:27.242853 | orchestrator | 2026-04-07 00:54:27.242860 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-07 00:54:27.242867 | orchestrator | 2026-04-07 00:54:27.242874 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-07 00:54:27.242882 | orchestrator | Tuesday 07 April 2026 00:52:49 +0000 (0:00:00.336) 0:00:41.081 ********* 2026-04-07 00:54:27.242889 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:54:27.242896 | orchestrator | 2026-04-07 00:54:27.242903 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-07 00:54:27.242910 | orchestrator | Tuesday 07 April 2026 00:52:50 +0000 (0:00:00.549) 0:00:41.631 ********* 2026-04-07 00:54:27.242917 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:54:27.242924 | orchestrator | 2026-04-07 00:54:27.242931 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-07 00:54:27.242938 | orchestrator | Tuesday 07 April 2026 00:52:50 +0000 (0:00:00.281) 0:00:41.912 ********* 2026-04-07 00:54:27.242945 | orchestrator 
| changed: [testbed-node-0] 2026-04-07 00:54:27.242953 | orchestrator | 2026-04-07 00:54:27.242960 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-07 00:54:27.242967 | orchestrator | Tuesday 07 April 2026 00:52:52 +0000 (0:00:01.701) 0:00:43.614 ********* 2026-04-07 00:54:27.242974 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:54:27.242981 | orchestrator | 2026-04-07 00:54:27.242988 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-07 00:54:27.242995 | orchestrator | 2026-04-07 00:54:27.243002 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-07 00:54:27.243010 | orchestrator | Tuesday 07 April 2026 00:53:43 +0000 (0:00:51.108) 0:01:34.722 ********* 2026-04-07 00:54:27.243017 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:54:27.243028 | orchestrator | 2026-04-07 00:54:27.243035 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-07 00:54:27.243042 | orchestrator | Tuesday 07 April 2026 00:53:43 +0000 (0:00:00.619) 0:01:35.342 ********* 2026-04-07 00:54:27.243050 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:54:27.243057 | orchestrator | 2026-04-07 00:54:27.243064 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-07 00:54:27.243071 | orchestrator | Tuesday 07 April 2026 00:53:43 +0000 (0:00:00.199) 0:01:35.542 ********* 2026-04-07 00:54:27.243078 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:54:27.243085 | orchestrator | 2026-04-07 00:54:27.243093 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-07 00:54:27.243100 | orchestrator | Tuesday 07 April 2026 00:53:45 +0000 (0:00:01.578) 0:01:37.121 ********* 2026-04-07 00:54:27.243107 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:54:27.243114 
| orchestrator | 2026-04-07 00:54:27.243122 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-07 00:54:27.243129 | orchestrator | 2026-04-07 00:54:27.243136 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-07 00:54:27.243143 | orchestrator | Tuesday 07 April 2026 00:54:00 +0000 (0:00:14.666) 0:01:51.787 ********* 2026-04-07 00:54:27.243150 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:54:27.243157 | orchestrator | 2026-04-07 00:54:27.243169 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-07 00:54:27.243185 | orchestrator | Tuesday 07 April 2026 00:54:00 +0000 (0:00:00.800) 0:01:52.588 ********* 2026-04-07 00:54:27.243193 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:54:27.243200 | orchestrator | 2026-04-07 00:54:27.243207 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-07 00:54:27.243214 | orchestrator | Tuesday 07 April 2026 00:54:01 +0000 (0:00:00.541) 0:01:53.129 ********* 2026-04-07 00:54:27.243221 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:54:27.243229 | orchestrator | 2026-04-07 00:54:27.243236 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-07 00:54:27.243243 | orchestrator | Tuesday 07 April 2026 00:54:08 +0000 (0:00:07.331) 0:02:00.461 ********* 2026-04-07 00:54:27.243250 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:54:27.243257 | orchestrator | 2026-04-07 00:54:27.243264 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-07 00:54:27.243271 | orchestrator | 2026-04-07 00:54:27.243278 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-07 00:54:27.243286 | orchestrator | Tuesday 07 April 2026 00:54:21 +0000 (0:00:12.587) 
0:02:13.048 ********* 2026-04-07 00:54:27.243293 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:54:27.243300 | orchestrator | 2026-04-07 00:54:27.243307 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-07 00:54:27.243314 | orchestrator | Tuesday 07 April 2026 00:54:22 +0000 (0:00:00.622) 0:02:13.671 ********* 2026-04-07 00:54:27.243322 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:54:27.243329 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:54:27.243336 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:54:27.243343 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-07 00:54:27.243350 | orchestrator | enable_outward_rabbitmq_True 2026-04-07 00:54:27.243357 | orchestrator | 2026-04-07 00:54:27.243365 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-04-07 00:54:27.243372 | orchestrator | skipping: no hosts matched 2026-04-07 00:54:27.243379 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-07 00:54:27.243386 | orchestrator | outward_rabbitmq_restart 2026-04-07 00:54:27.243394 | orchestrator | 2026-04-07 00:54:27.243401 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-04-07 00:54:27.243408 | orchestrator | skipping: no hosts matched 2026-04-07 00:54:27.243415 | orchestrator | 2026-04-07 00:54:27.243422 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-04-07 00:54:27.243429 | orchestrator | skipping: no hosts matched 2026-04-07 00:54:27.243436 | orchestrator | 2026-04-07 00:54:27.243444 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:54:27.243451 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-07 
00:54:27.243459 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-07 00:54:27.243466 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:54:27.243473 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 00:54:27.243480 | orchestrator | 2026-04-07 00:54:27.243487 | orchestrator | 2026-04-07 00:54:27.243494 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:54:27.243501 | orchestrator | Tuesday 07 April 2026 00:54:24 +0000 (0:00:02.164) 0:02:15.836 ********* 2026-04-07 00:54:27.243509 | orchestrator | =============================================================================== 2026-04-07 00:54:27.243521 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.36s 2026-04-07 00:54:27.243529 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.61s 2026-04-07 00:54:27.243536 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.07s 2026-04-07 00:54:27.243543 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.12s 2026-04-07 00:54:27.243550 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 3.05s 2026-04-07 00:54:27.243561 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.97s 2026-04-07 00:54:27.243568 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.41s 2026-04-07 00:54:27.243576 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.19s 2026-04-07 00:54:27.243583 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.16s 2026-04-07 00:54:27.243590 | 
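The plays above restart RabbitMQ one cluster member at a time (testbed-node-0, then -1, then -2), with the "Put RabbitMQ node into maintenance mode" step skipped in this run. The general rolling pattern behind those tasks can be sketched as the following operational fragment (illustrative only, to be run against each node in sequence; kolla-ansible automates this, and the container/command names are assumptions based on the task names in the log):

```shell
# Rolling restart of a single RabbitMQ cluster member (illustrative sketch).
docker exec rabbitmq rabbitmq-upgrade drain            # maintenance mode (skipped in this run)
docker restart rabbitmq                                # "Restart rabbitmq container"
until docker exec rabbitmq rabbitmqctl await_startup; do
    sleep 5                                            # "Waiting for rabbitmq to start"
done
docker exec rabbitmq rabbitmq-upgrade revive           # leave maintenance mode
```

Draining before the restart lets queues fail over to the other members, which is why the nodes are handled serially rather than in parallel.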
orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.97s 2026-04-07 00:54:27.243597 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.94s 2026-04-07 00:54:27.243604 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.74s 2026-04-07 00:54:27.243611 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.67s 2026-04-07 00:54:27.243619 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.64s 2026-04-07 00:54:27.243626 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.60s 2026-04-07 00:54:27.243633 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.33s 2026-04-07 00:54:27.243640 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.15s 2026-04-07 00:54:27.243652 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.03s 2026-04-07 00:54:27.243659 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.02s 2026-04-07 00:54:27.243666 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.96s 2026-04-07 00:54:27.243673 | orchestrator | 2026-04-07 00:54:27 | INFO  | Task d1fd439e-045c-437f-a0a3-24309ff08196 is in state SUCCESS 2026-04-07 00:54:27.243681 | orchestrator | 2026-04-07 00:54:27 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:54:27.243689 | orchestrator | 2026-04-07 00:54:27 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:54:27.243696 | orchestrator | 2026-04-07 00:54:27 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:54:30.286243 | orchestrator | 2026-04-07 00:54:30 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 
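The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records come from a simple polling loop that re-queries each task's state until it reaches a terminal one. A minimal sketch of such a loop (the `get_state` callback and task IDs are hypothetical, not the actual osism implementation):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll tasks until none is still running.

    get_state(task_id) -> str, e.g. "STARTED", "SUCCESS" or "FAILURE".
    Returns a dict of final states; raises TimeoutError if tasks remain.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):          # sorted() copies, safe to discard below
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

With a real backend, `get_state` would query the task result store; the loop shape (query all, print, sleep, repeat) matches the cadence visible in the log.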
[… identical polling output elided: tasks fb63ecc7-ad64-4e9e-8f2b-197153bf638c, d1001d90-56f6-41b1-9236-bca720988367 and 416047b1-3b0d-46e6-9711-0fd037214fb6 remained in state STARTED, re-checked every ~3 seconds from 00:54:30 to 00:55:12 …] 2026-04-07 00:55:15.996299 | orchestrator | 2026-04-07 00:55:15 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:55:15.997512 | orchestrator | 2026-04-07 00:55:15 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state STARTED 2026-04-07 00:55:15.998722 | orchestrator | 2026-04-07
00:55:15 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:55:15.998750 | orchestrator | 2026-04-07 00:55:15 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:55:19.098948 | orchestrator | 2026-04-07 00:55:19 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:55:19.099826 | orchestrator | 2026-04-07 00:55:19 | INFO  | Task d1001d90-56f6-41b1-9236-bca720988367 is in state SUCCESS 2026-04-07 00:55:19.101270 | orchestrator | 2026-04-07 00:55:19.101313 | orchestrator | 2026-04-07 00:55:19.101319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 00:55:19.101341 | orchestrator | 2026-04-07 00:55:19.101345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 00:55:19.101350 | orchestrator | Tuesday 07 April 2026 00:52:58 +0000 (0:00:00.271) 0:00:00.271 ********* 2026-04-07 00:55:19.101354 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:55:19.101374 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:55:19.101378 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:55:19.101382 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:55:19.101386 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:55:19.101390 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:55:19.101393 | orchestrator | 2026-04-07 00:55:19.101398 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 00:55:19.101402 | orchestrator | Tuesday 07 April 2026 00:52:59 +0000 (0:00:00.685) 0:00:00.957 ********* 2026-04-07 00:55:19.101406 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-07 00:55:19.101410 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-07 00:55:19.101414 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-07 00:55:19.101417 | orchestrator | ok: [testbed-node-3] => 
(item=enable_ovn_True) 2026-04-07 00:55:19.101421 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-07 00:55:19.101425 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-07 00:55:19.101429 | orchestrator | 2026-04-07 00:55:19.101432 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-07 00:55:19.101436 | orchestrator | 2026-04-07 00:55:19.101440 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-07 00:55:19.101444 | orchestrator | Tuesday 07 April 2026 00:53:01 +0000 (0:00:02.130) 0:00:03.088 ********* 2026-04-07 00:55:19.101449 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:55:19.101467 | orchestrator | 2026-04-07 00:55:19.101471 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-07 00:55:19.101475 | orchestrator | Tuesday 07 April 2026 00:53:02 +0000 (0:00:01.269) 0:00:04.357 ********* 2026-04-07 00:55:19.101481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101554 | orchestrator | 2026-04-07 00:55:19.101567 | 
orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-07 00:55:19.101571 | orchestrator | Tuesday 07 April 2026 00:53:03 +0000 (0:00:01.396) 0:00:05.754 ********* 2026-04-07 00:55:19.101575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101598 | orchestrator | 2026-04-07 00:55:19.101602 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-07 00:55:19.101606 | orchestrator | Tuesday 07 April 2026 00:53:05 +0000 (0:00:01.509) 0:00:07.263 ********* 2026-04-07 00:55:19.101619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101646 | orchestrator | 2026-04-07 00:55:19.101649 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-07 00:55:19.101653 | orchestrator | Tuesday 07 April 2026 00:53:06 +0000 (0:00:01.545) 0:00:08.809 ********* 2026-04-07 00:55:19.101657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101686 | orchestrator | 2026-04-07 00:55:19.101691 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-07 00:55:19.101695 | orchestrator | Tuesday 07 April 2026 00:53:08 +0000 (0:00:01.637) 0:00:10.447 ********* 2026-04-07 00:55:19.101699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101703 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.101798 | orchestrator | 2026-04-07 00:55:19.101804 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-07 00:55:19.101810 | orchestrator | Tuesday 07 April 2026 00:53:10 +0000 (0:00:01.573) 0:00:12.021 ********* 2026-04-07 00:55:19.101816 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:55:19.101822 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:55:19.101828 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:55:19.101834 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:55:19.101840 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:55:19.101846 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:55:19.101852 | orchestrator | 2026-04-07 00:55:19.101858 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-07 00:55:19.101867 | orchestrator | Tuesday 07 April 2026 00:53:12 +0000 (0:00:02.695) 0:00:14.716 ********* 2026-04-07 00:55:19.101873 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-07 00:55:19.101879 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-07 00:55:19.101885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-07 00:55:19.101890 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-07 00:55:19.101916 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-07 00:55:19.101922 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-07 00:55:19.101928 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-07 00:55:19.101934 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-07 00:55:19.101945 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-07 00:55:19.101951 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-07 00:55:19.101956 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-07 00:55:19.101962 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-07 00:55:19.101968 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-07 00:55:19.101976 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-07 00:55:19.101982 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-07 00:55:19.101988 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-07 00:55:19.101994 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-07 00:55:19.102000 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-07 00:55:19.102008 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-07 00:55:19.102075 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-07 00:55:19.102084 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-07 00:55:19.102091 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-07 00:55:19.102097 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-07 00:55:19.102103 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-07 00:55:19.102109 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-07 00:55:19.102115 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-07 00:55:19.102121 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-07 00:55:19.102127 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-07 00:55:19.102132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-07 00:55:19.102138 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-07 00:55:19.102144 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-07 00:55:19.102151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-07 00:55:19.102157 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-07 00:55:19.102163 | 
orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-07 00:55:19.102170 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-07 00:55:19.102175 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-07 00:55:19.102181 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-07 00:55:19.102193 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-07 00:55:19.102199 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-07 00:55:19.102205 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-07 00:55:19.102211 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-07 00:55:19.102217 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-07 00:55:19.102222 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-07 00:55:19.102228 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-07 00:55:19.102240 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-07 00:55:19.102247 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-07 00:55:19.102253 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-07 00:55:19.102259 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-07 00:55:19.102272 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-07 00:55:19.102278 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-07 00:55:19.102284 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-07 00:55:19.102288 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-07 00:55:19.102292 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-07 00:55:19.102296 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-07 00:55:19.102300 | orchestrator | 2026-04-07 00:55:19.102304 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 00:55:19.102307 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:19.563) 0:00:34.279 ********* 2026-04-07 00:55:19.102311 | orchestrator | 2026-04-07 00:55:19.102315 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 00:55:19.102319 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.071) 0:00:34.351 ********* 2026-04-07 00:55:19.102322 | orchestrator | 2026-04-07 00:55:19.102326 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 
00:55:19.102330 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.064) 0:00:34.415 ********* 2026-04-07 00:55:19.102334 | orchestrator | 2026-04-07 00:55:19.102337 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 00:55:19.102341 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.064) 0:00:34.479 ********* 2026-04-07 00:55:19.102345 | orchestrator | 2026-04-07 00:55:19.102349 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 00:55:19.102352 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.067) 0:00:34.547 ********* 2026-04-07 00:55:19.102356 | orchestrator | 2026-04-07 00:55:19.102360 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 00:55:19.102364 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.075) 0:00:34.622 ********* 2026-04-07 00:55:19.102367 | orchestrator | 2026-04-07 00:55:19.102371 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-07 00:55:19.102375 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.063) 0:00:34.686 ********* 2026-04-07 00:55:19.102378 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:55:19.102385 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:55:19.102391 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:55:19.102397 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:55:19.102403 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:55:19.102408 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:55:19.102413 | orchestrator | 2026-04-07 00:55:19.102419 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-07 00:55:19.102424 | orchestrator | Tuesday 07 April 2026 00:53:34 +0000 (0:00:02.114) 0:00:36.801 ********* 2026-04-07 00:55:19.102430 | orchestrator | changed: [testbed-node-0] 
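The `Configure OVN in OVSDB` task above writes chassis-level `external_ids` into each node's local Open vSwitch database, which the restarted `ovn_controller` container then picks up. A minimal sketch of the equivalent manual configuration for `testbed-node-0`, with all values copied from the task output (this assumes `ovs-vsctl` is available on the node; kolla-ansible applies these settings item by item through its own module, not via this exact command):

```shell
# Sketch: the chassis-level OVN settings shown for testbed-node-0,
# collapsed into a single ovs-vsctl call. Values are copied from the
# task output above; this is not the literal command the role runs.
ovs-vsctl set open_vswitch . \
  external_ids:ovn-encap-ip=192.168.16.10 \
  external_ids:ovn-encap-type=geneve \
  external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
  external_ids:ovn-remote-probe-interval=60000 \
  external_ids:ovn-openflow-probe-interval=60 \
  external_ids:ovn-monitor-all=false \
  external_ids:ovn-bridge-mappings=physnet1:br-ex \
  external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"
```

Note the per-node split visible in the results: `ovn-bridge-mappings` and `ovn-cms-options` are set (`state: present`) only on the gateway chassis testbed-node-0/1/2, while on testbed-node-3/4/5 the log shows them removed (`state: absent`) and `ovn-chassis-mac-mappings` set instead.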
2026-04-07 00:55:19.102436 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:55:19.102442 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:55:19.102448 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:55:19.102453 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:55:19.102459 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:55:19.102466 | orchestrator | 2026-04-07 00:55:19.102472 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-07 00:55:19.102477 | orchestrator | 2026-04-07 00:55:19.102483 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-07 00:55:19.102492 | orchestrator | Tuesday 07 April 2026 00:53:57 +0000 (0:00:22.768) 0:00:59.569 ********* 2026-04-07 00:55:19.102504 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:55:19.102510 | orchestrator | 2026-04-07 00:55:19.102516 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-07 00:55:19.102522 | orchestrator | Tuesday 07 April 2026 00:53:58 +0000 (0:00:00.604) 0:01:00.174 ********* 2026-04-07 00:55:19.102528 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:55:19.102533 | orchestrator | 2026-04-07 00:55:19.102539 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-07 00:55:19.102544 | orchestrator | Tuesday 07 April 2026 00:53:59 +0000 (0:00:01.158) 0:01:01.332 ********* 2026-04-07 00:55:19.102549 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:55:19.102555 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:55:19.102560 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:55:19.102566 | orchestrator | 2026-04-07 00:55:19.102571 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB 
volume availability] *************** 2026-04-07 00:55:19.102577 | orchestrator | Tuesday 07 April 2026 00:54:00 +0000 (0:00:01.454) 0:01:02.787 ********* 2026-04-07 00:55:19.102582 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:55:19.102587 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:55:19.102593 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:55:19.102602 | orchestrator | 2026-04-07 00:55:19.102608 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-07 00:55:19.102613 | orchestrator | Tuesday 07 April 2026 00:54:01 +0000 (0:00:00.726) 0:01:03.513 ********* 2026-04-07 00:55:19.102618 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:55:19.102624 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:55:19.102629 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:55:19.102635 | orchestrator | 2026-04-07 00:55:19.102641 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-07 00:55:19.102647 | orchestrator | Tuesday 07 April 2026 00:54:02 +0000 (0:00:01.123) 0:01:04.637 ********* 2026-04-07 00:55:19.102653 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:55:19.102658 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:55:19.102664 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:55:19.102670 | orchestrator | 2026-04-07 00:55:19.102676 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-07 00:55:19.102682 | orchestrator | Tuesday 07 April 2026 00:54:03 +0000 (0:00:00.560) 0:01:05.198 ********* 2026-04-07 00:55:19.102688 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:55:19.102694 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:55:19.102701 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:55:19.102704 | orchestrator | 2026-04-07 00:55:19.102708 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-07 00:55:19.102712 | 
orchestrator | Tuesday 07 April 2026 00:54:03 +0000 (0:00:00.286) 0:01:05.484 ********* 2026-04-07 00:55:19.102716 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.102720 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.102724 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.102727 | orchestrator | 2026-04-07 00:55:19.102731 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-07 00:55:19.102735 | orchestrator | Tuesday 07 April 2026 00:54:03 +0000 (0:00:00.307) 0:01:05.791 ********* 2026-04-07 00:55:19.102739 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.102743 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.102746 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.102750 | orchestrator | 2026-04-07 00:55:19.102754 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-07 00:55:19.102758 | orchestrator | Tuesday 07 April 2026 00:54:04 +0000 (0:00:00.406) 0:01:06.198 ********* 2026-04-07 00:55:19.102761 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.102766 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.102769 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.102778 | orchestrator | 2026-04-07 00:55:19.102782 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-07 00:55:19.102786 | orchestrator | Tuesday 07 April 2026 00:54:04 +0000 (0:00:00.226) 0:01:06.424 ********* 2026-04-07 00:55:19.102790 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.102793 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.102797 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.102801 | orchestrator | 2026-04-07 00:55:19.102804 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-07 00:55:19.102808 | 
orchestrator | Tuesday 07 April 2026 00:54:04 +0000 (0:00:00.255) 0:01:06.679 ********* 2026-04-07 00:55:19.102812 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.102816 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.102819 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.102823 | orchestrator | 2026-04-07 00:55:19.102827 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-07 00:55:19.102833 | orchestrator | Tuesday 07 April 2026 00:54:05 +0000 (0:00:00.368) 0:01:07.048 ********* 2026-04-07 00:55:19.102839 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.102844 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.102854 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.102863 | orchestrator | 2026-04-07 00:55:19.102868 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-07 00:55:19.102874 | orchestrator | Tuesday 07 April 2026 00:54:05 +0000 (0:00:00.434) 0:01:07.482 ********* 2026-04-07 00:55:19.102879 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.102885 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.102890 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.102938 | orchestrator | 2026-04-07 00:55:19.102944 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-07 00:55:19.102950 | orchestrator | Tuesday 07 April 2026 00:54:05 +0000 (0:00:00.249) 0:01:07.732 ********* 2026-04-07 00:55:19.102956 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.102961 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.102967 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.102972 | orchestrator | 2026-04-07 00:55:19.102978 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-07 00:55:19.102984 | 
orchestrator | Tuesday 07 April 2026 00:54:06 +0000 (0:00:00.269) 0:01:08.002 ********* 2026-04-07 00:55:19.102995 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103001 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103006 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103011 | orchestrator | 2026-04-07 00:55:19.103018 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-07 00:55:19.103024 | orchestrator | Tuesday 07 April 2026 00:54:06 +0000 (0:00:00.302) 0:01:08.304 ********* 2026-04-07 00:55:19.103030 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103036 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103042 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103049 | orchestrator | 2026-04-07 00:55:19.103053 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-07 00:55:19.103057 | orchestrator | Tuesday 07 April 2026 00:54:06 +0000 (0:00:00.227) 0:01:08.531 ********* 2026-04-07 00:55:19.103061 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103064 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103068 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103072 | orchestrator | 2026-04-07 00:55:19.103076 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-07 00:55:19.103079 | orchestrator | Tuesday 07 April 2026 00:54:07 +0000 (0:00:00.509) 0:01:09.041 ********* 2026-04-07 00:55:19.103083 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103087 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103098 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103109 | orchestrator | 2026-04-07 00:55:19.103113 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-07 00:55:19.103116 | 
orchestrator | Tuesday 07 April 2026 00:54:07 +0000 (0:00:00.239) 0:01:09.281 ********* 2026-04-07 00:55:19.103120 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:55:19.103124 | orchestrator | 2026-04-07 00:55:19.103128 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-07 00:55:19.103132 | orchestrator | Tuesday 07 April 2026 00:54:07 +0000 (0:00:00.427) 0:01:09.708 ********* 2026-04-07 00:55:19.103136 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:55:19.103139 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:55:19.103143 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:55:19.103147 | orchestrator | 2026-04-07 00:55:19.103150 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-07 00:55:19.103154 | orchestrator | Tuesday 07 April 2026 00:54:08 +0000 (0:00:00.674) 0:01:10.383 ********* 2026-04-07 00:55:19.103158 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:55:19.103162 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:55:19.103166 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:55:19.103169 | orchestrator | 2026-04-07 00:55:19.103173 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-07 00:55:19.103177 | orchestrator | Tuesday 07 April 2026 00:54:09 +0000 (0:00:00.584) 0:01:10.968 ********* 2026-04-07 00:55:19.103181 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103184 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103188 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103192 | orchestrator | 2026-04-07 00:55:19.103196 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-07 00:55:19.103199 | orchestrator | Tuesday 07 April 2026 00:54:09 +0000 (0:00:00.485) 0:01:11.453 ********* 
2026-04-07 00:55:19.103203 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103207 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103210 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103214 | orchestrator | 2026-04-07 00:55:19.103218 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-07 00:55:19.103222 | orchestrator | Tuesday 07 April 2026 00:54:09 +0000 (0:00:00.310) 0:01:11.763 ********* 2026-04-07 00:55:19.103226 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103229 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103233 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103237 | orchestrator | 2026-04-07 00:55:19.103241 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-07 00:55:19.103245 | orchestrator | Tuesday 07 April 2026 00:54:10 +0000 (0:00:00.522) 0:01:12.286 ********* 2026-04-07 00:55:19.103248 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103252 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103256 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103260 | orchestrator | 2026-04-07 00:55:19.103264 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-04-07 00:55:19.103267 | orchestrator | Tuesday 07 April 2026 00:54:10 +0000 (0:00:00.351) 0:01:12.637 ********* 2026-04-07 00:55:19.103271 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103275 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103279 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103282 | orchestrator | 2026-04-07 00:55:19.103286 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-07 00:55:19.103290 | orchestrator | Tuesday 07 April 2026 00:54:11 +0000 (0:00:00.283) 
0:01:12.920 ********* 2026-04-07 00:55:19.103293 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:55:19.103297 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:55:19.103301 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:55:19.103305 | orchestrator | 2026-04-07 00:55:19.103308 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-07 00:55:19.103317 | orchestrator | Tuesday 07 April 2026 00:54:11 +0000 (0:00:00.268) 0:01:13.189 ********* 2026-04-07 00:55:19.103322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103369 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103377 | orchestrator | 2026-04-07 00:55:19.103382 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-07 00:55:19.103394 | orchestrator | Tuesday 07 April 2026 00:54:12 +0000 (0:00:01.645) 0:01:14.834 ********* 2026-04-07 00:55:19.103400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 00:55:19.103427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103461 | orchestrator |
2026-04-07 00:55:19.103465 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-07 00:55:19.103469 | orchestrator | Tuesday 07 April 2026 00:54:16 +0000 (0:00:03.832) 0:01:18.667 *********
2026-04-07 00:55:19.103477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103521 | orchestrator |
2026-04-07 00:55:19.103525 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-07 00:55:19.103529 | orchestrator | Tuesday 07 April 2026 00:54:19 +0000 (0:00:02.466) 0:01:21.133 *********
2026-04-07 00:55:19.103537 | orchestrator |
2026-04-07 00:55:19.103540 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-07 00:55:19.103544 | orchestrator | Tuesday 07 April 2026 00:54:19 +0000 (0:00:00.069) 0:01:21.202 *********
2026-04-07 00:55:19.103548 | orchestrator |
2026-04-07 00:55:19.103552 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-07 00:55:19.103555 | orchestrator | Tuesday 07 April 2026 00:54:19 +0000 (0:00:00.070) 0:01:21.273 *********
2026-04-07 00:55:19.103559 | orchestrator |
2026-04-07 00:55:19.103563 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-07 00:55:19.103567 | orchestrator | Tuesday 07 April 2026 00:54:19 +0000 (0:00:00.070) 0:01:21.343 *********
2026-04-07 00:55:19.103571 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:55:19.103575 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:55:19.103579 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:55:19.103583 | orchestrator |
2026-04-07 00:55:19.103589 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-07 00:55:19.103595 | orchestrator | Tuesday 07 April 2026 00:54:26 +0000 (0:00:07.333) 0:01:28.677 *********
2026-04-07 00:55:19.103605 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:55:19.103611 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:55:19.103617 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:55:19.103623 | orchestrator |
2026-04-07 00:55:19.103629 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-07 00:55:19.103635 | orchestrator | Tuesday 07 April 2026 00:54:29 +0000 (0:00:02.811) 0:01:31.488 *********
2026-04-07 00:55:19.103640 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:55:19.103645 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:55:19.103651 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:55:19.103657 | orchestrator |
2026-04-07 00:55:19.103663 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-07 00:55:19.103668 | orchestrator | Tuesday 07 April 2026 00:54:37 +0000 (0:00:07.631) 0:01:39.120 *********
2026-04-07 00:55:19.103674 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:55:19.103680 | orchestrator |
2026-04-07 00:55:19.103686 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-07 00:55:19.103691 | orchestrator | Tuesday 07 April 2026 00:54:37 +0000 (0:00:00.230) 0:01:39.350 *********
2026-04-07 00:55:19.103698 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:55:19.103703 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:55:19.103710 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:55:19.103717 | orchestrator |
2026-04-07 00:55:19.103722 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-07 00:55:19.103733 | orchestrator | Tuesday 07 April 2026 00:54:38 +0000 (0:00:00.854) 0:01:40.205 *********
2026-04-07 00:55:19.103739 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:55:19.103743 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:55:19.103747 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:55:19.103750 | orchestrator |
2026-04-07 00:55:19.103754 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-07 00:55:19.103758 | orchestrator | Tuesday 07 April 2026 00:54:38 +0000 (0:00:00.599) 0:01:40.805 *********
2026-04-07 00:55:19.103762 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:55:19.103766 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:55:19.103769 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:55:19.103773 | orchestrator |
2026-04-07 00:55:19.103777 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-07 00:55:19.103780 | orchestrator | Tuesday 07 April 2026 00:54:40 +0000 (0:00:01.080) 0:01:41.885 *********
2026-04-07 00:55:19.103784 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:55:19.103788 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:55:19.103792 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:55:19.103796 | orchestrator |
2026-04-07 00:55:19.103800 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-07 00:55:19.103803 | orchestrator | Tuesday 07 April 2026 00:54:40 +0000 (0:00:00.608) 0:01:42.494 *********
2026-04-07 00:55:19.103812 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:55:19.103816 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:55:19.103824 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:55:19.103828 | orchestrator |
2026-04-07 00:55:19.103832 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-07 00:55:19.103836 | orchestrator | Tuesday 07 April 2026 00:54:41 +0000 (0:00:00.879) 0:01:43.373 *********
2026-04-07 00:55:19.103840 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:55:19.103843 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:55:19.103847 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:55:19.103851 | orchestrator |
2026-04-07 00:55:19.103855 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-04-07 00:55:19.103858 | orchestrator | Tuesday 07 April 2026 00:54:42 +0000 (0:00:00.874) 0:01:44.247 *********
2026-04-07 00:55:19.103862 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:55:19.103866 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:55:19.103870 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:55:19.103873 | orchestrator |
2026-04-07 00:55:19.103877 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-07 00:55:19.103881 | orchestrator | Tuesday 07 April 2026 00:54:42 +0000 (0:00:00.474) 0:01:44.722 *********
2026-04-07 00:55:19.103885 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103889 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103918 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103925 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103933 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103939 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103948 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103960 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103973 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103978 | orchestrator |
2026-04-07 00:55:19.103981 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-07 00:55:19.103985 | orchestrator | Tuesday 07 April 2026 00:54:44 +0000 (0:00:01.694) 0:01:46.416 *********
2026-04-07 00:55:19.103989 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103993 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.103997 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104001 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104013 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104038 | orchestrator |
2026-04-07 00:55:19.104043 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-07 00:55:19.104049 | orchestrator | Tuesday 07 April 2026 00:54:48 +0000 (0:00:03.997) 0:01:50.414 *********
2026-04-07 00:55:19.104060 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104064 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104068 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104072 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104084 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:55:19.104103 | orchestrator |
2026-04-07 00:55:19.104107 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-07 00:55:19.104110 | orchestrator | Tuesday 07 April 2026 00:54:52 +0000 (0:00:03.642) 0:01:54.057 *********
2026-04-07 00:55:19.104114 | orchestrator |
2026-04-07 00:55:19.104118 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-07 00:55:19.104122 | orchestrator | Tuesday 07 April 2026 00:54:52 +0000 (0:00:00.063) 0:01:54.121 *********
2026-04-07 00:55:19.104126 | orchestrator |
2026-04-07 00:55:19.104130 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-07 00:55:19.104133 | orchestrator | Tuesday 07 April 2026 00:54:52 +0000 (0:00:00.247) 0:01:54.369 *********
2026-04-07 00:55:19.104137 | orchestrator |
2026-04-07 00:55:19.104141 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-07 00:55:19.104145 | orchestrator | Tuesday 07 April 2026 00:54:52 +0000 (0:00:00.067) 0:01:54.436 *********
2026-04-07 00:55:19.104149 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:55:19.104153 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:55:19.104157 | orchestrator |
2026-04-07 00:55:19.104164 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-07 00:55:19.104168 | orchestrator | Tuesday 07 April 2026 00:54:59 +0000 (0:00:06.465) 0:02:00.901 *********
2026-04-07 00:55:19.104171 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:55:19.104175 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:55:19.104179 | orchestrator |
2026-04-07 00:55:19.104183 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-07 00:55:19.104187 | orchestrator | Tuesday 07 April 2026 00:55:05 +0000 (0:00:06.566) 0:02:07.468 *********
2026-04-07 00:55:19.104190 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:55:19.104194 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:55:19.104198 | orchestrator |
2026-04-07 00:55:19.104202 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-07 00:55:19.104206 | orchestrator | Tuesday 07 April 2026 00:55:11 +0000 (0:00:06.255) 0:02:13.723 *********
2026-04-07 00:55:19.104210 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:55:19.104214 | orchestrator |
2026-04-07 00:55:19.104217 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-07 00:55:19.104221 | orchestrator | Tuesday 07 April 2026 00:55:12 +0000 (0:00:00.198) 0:02:13.921 *********
2026-04-07 00:55:19.104225 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:55:19.104229 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:55:19.104233 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:55:19.104236 | orchestrator |
2026-04-07 00:55:19.104240 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-07 00:55:19.104244 | orchestrator | Tuesday 07 April 2026 00:55:12 +0000 (0:00:00.810) 0:02:14.732 *********
2026-04-07 00:55:19.104248 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:55:19.104252 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:55:19.104255 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:55:19.104259 | orchestrator |
2026-04-07 00:55:19.104263 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-07 00:55:19.104271 | orchestrator | Tuesday 07 April 2026 00:55:13 +0000 (0:00:01.026) 0:02:15.758 *********
2026-04-07 00:55:19.104275 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:55:19.104279 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:55:19.104282 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:55:19.104286 | orchestrator |
2026-04-07 00:55:19.104290 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-07 00:55:19.104294 | orchestrator | Tuesday 07 April 2026 00:55:14 +0000 (0:00:00.924) 0:02:16.683 *********
2026-04-07 00:55:19.104298 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:55:19.104301 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:55:19.104305 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:55:19.104309 | orchestrator |
2026-04-07 00:55:19.104313 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-07 00:55:19.104316 | orchestrator | Tuesday 07 April 2026 00:55:15 +0000 (0:00:00.668) 0:02:17.351 *********
2026-04-07 00:55:19.104320 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:55:19.104324 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:55:19.104328 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:55:19.104331 | orchestrator |
2026-04-07 00:55:19.104335 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-07 00:55:19.104339 | orchestrator | Tuesday 07 April 2026 00:55:16 +0000 (0:00:00.979) 0:02:18.331 *********
2026-04-07 00:55:19.104343 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:55:19.104347 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:55:19.104350 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:55:19.104354 | orchestrator |
2026-04-07 00:55:19.104358 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 00:55:19.104362 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-07 00:55:19.104366 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-07 00:55:19.104370 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-07 00:55:19.104374 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:55:19.104378 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:55:19.104385 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 00:55:19.104389 | orchestrator |
2026-04-07 00:55:19.104393 | orchestrator |
2026-04-07 00:55:19.104396 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 00:55:19.104400 | orchestrator | Tuesday 07 April 2026 00:55:17 +0000 (0:00:01.316) 0:02:19.648 *********
2026-04-07 00:55:19.104404 | orchestrator | ===============================================================================
2026-04-07 00:55:19.104408 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 22.77s
2026-04-07 00:55:19.104412 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.56s
2026-04-07 00:55:19.104415 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.89s
2026-04-07 00:55:19.104419 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.80s
2026-04-07 00:55:19.104423 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.38s
2026-04-07 00:55:19.104427 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.00s
2026-04-07 00:55:19.104431 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.83s
2026-04-07 00:55:19.104438 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.64s
2026-04-07 00:55:19.104448 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.70s
2026-04-07 00:55:19.104453 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.47s
2026-04-07 00:55:19.104459 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.13s
2026-04-07 00:55:19.104464 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.11s
2026-04-07 00:55:19.104469 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s
2026-04-07 00:55:19.104475 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.65s
2026-04-07 00:55:19.104481 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.64s
2026-04-07 00:55:19.104486 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.57s
2026-04-07 00:55:19.104491 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.55s
2026-04-07 00:55:19.104497 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.51s
2026-04-07 00:55:19.104502 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.45s
2026-04-07 00:55:19.104508 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.40s
2026-04-07 00:55:19.104515 | orchestrator | 2026-04-07 00:55:19 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:19.104521 | orchestrator | 2026-04-07 00:55:19 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:22.147701 | orchestrator | 2026-04-07 00:55:22 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:22.148839 | orchestrator | 2026-04-07 00:55:22 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:22.148999 | orchestrator | 2026-04-07 00:55:22 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:25.194647 | orchestrator | 2026-04-07 00:55:25 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:25.196117 | orchestrator | 2026-04-07 00:55:25 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:25.196307 | orchestrator | 2026-04-07 00:55:25 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:28.244331 | orchestrator | 2026-04-07 00:55:28 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:28.245938 | orchestrator | 2026-04-07 00:55:28 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:28.245974 | orchestrator | 2026-04-07 00:55:28 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:31.297489 | orchestrator | 2026-04-07 00:55:31 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:31.298779 | orchestrator | 2026-04-07 00:55:31 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:31.298832 | orchestrator | 2026-04-07 00:55:31 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:34.335847 | orchestrator | 2026-04-07 00:55:34 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:34.336259 | orchestrator | 2026-04-07 00:55:34 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:34.336286 | orchestrator | 2026-04-07 00:55:34 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:37.376119 | orchestrator | 2026-04-07 00:55:37 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:37.377388 | orchestrator | 2026-04-07 00:55:37 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:37.377477 | orchestrator | 2026-04-07 00:55:37 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:40.424439 | orchestrator | 2026-04-07 00:55:40 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:40.426061 | orchestrator | 2026-04-07 00:55:40 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:40.426117 | orchestrator | 2026-04-07 00:55:40 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:43.468469 | orchestrator | 2026-04-07 00:55:43 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:43.469883 | orchestrator | 2026-04-07 00:55:43 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:43.469928 | orchestrator | 2026-04-07 00:55:43 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:46.511838 | orchestrator | 2026-04-07 00:55:46 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:46.511915 | orchestrator | 2026-04-07 00:55:46 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:46.511925 | orchestrator | 2026-04-07 00:55:46 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:49.563135 | orchestrator | 2026-04-07 00:55:49 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:49.565411 | orchestrator | 2026-04-07 00:55:49 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:49.565470 | orchestrator | 2026-04-07 00:55:49 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:52.600328 | orchestrator | 2026-04-07 00:55:52 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:52.601331 | orchestrator | 2026-04-07 00:55:52 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:52.601366 | orchestrator | 2026-04-07 00:55:52 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:55.644290 | orchestrator | 2026-04-07 00:55:55 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:55.644364 | orchestrator | 2026-04-07 00:55:55 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:55.644371 | orchestrator | 2026-04-07 00:55:55 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:55:58.679094 | orchestrator | 2026-04-07 00:55:58 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:55:58.679245 | orchestrator | 2026-04-07 00:55:58 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:55:58.679262 | orchestrator | 2026-04-07 00:55:58 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:56:01.708246 | orchestrator | 2026-04-07 00:56:01 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:56:01.708323 | orchestrator | 2026-04-07 00:56:01 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:56:01.708330 | orchestrator | 2026-04-07 00:56:01 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:56:04.753444 | orchestrator | 2026-04-07 00:56:04 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:56:04.756267 | orchestrator | 2026-04-07 00:56:04 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:56:04.756322 | orchestrator | 2026-04-07 00:56:04 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:56:07.797877 | orchestrator | 2026-04-07 00:56:07 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:56:07.800150 | orchestrator | 2026-04-07 00:56:07 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:56:07.800241 | orchestrator | 2026-04-07 00:56:07 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:56:10.830626 | orchestrator | 2026-04-07 00:56:10 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:56:10.832594 | orchestrator | 2026-04-07 00:56:10 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:56:10.832650 | orchestrator | 2026-04-07 00:56:10 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:56:13.867502 | orchestrator | 2026-04-07 00:56:13 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:56:13.868855 | orchestrator | 2026-04-07 00:56:13 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:56:13.868949 | orchestrator | 2026-04-07 00:56:13 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:56:16.898500 | orchestrator | 2026-04-07 00:56:16 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED
2026-04-07 00:56:16.898557 | orchestrator | 2026-04-07 00:56:16 | INFO  | Task
416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:16.898565 | orchestrator | 2026-04-07 00:56:16 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:19.944619 | orchestrator | 2026-04-07 00:56:19 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:19.946097 | orchestrator | 2026-04-07 00:56:19 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:19.946467 | orchestrator | 2026-04-07 00:56:19 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:22.990470 | orchestrator | 2026-04-07 00:56:22 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:22.992296 | orchestrator | 2026-04-07 00:56:22 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:22.992358 | orchestrator | 2026-04-07 00:56:22 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:26.052956 | orchestrator | 2026-04-07 00:56:26 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:26.056530 | orchestrator | 2026-04-07 00:56:26 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:26.056993 | orchestrator | 2026-04-07 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:29.086868 | orchestrator | 2026-04-07 00:56:29 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:29.088665 | orchestrator | 2026-04-07 00:56:29 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:29.088786 | orchestrator | 2026-04-07 00:56:29 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:32.115151 | orchestrator | 2026-04-07 00:56:32 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:32.115201 | orchestrator | 2026-04-07 00:56:32 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 
00:56:32.115207 | orchestrator | 2026-04-07 00:56:32 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:35.139816 | orchestrator | 2026-04-07 00:56:35 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:35.140392 | orchestrator | 2026-04-07 00:56:35 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:35.140444 | orchestrator | 2026-04-07 00:56:35 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:38.182860 | orchestrator | 2026-04-07 00:56:38 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:38.184433 | orchestrator | 2026-04-07 00:56:38 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:38.184502 | orchestrator | 2026-04-07 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:41.234391 | orchestrator | 2026-04-07 00:56:41 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:41.236736 | orchestrator | 2026-04-07 00:56:41 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:41.236792 | orchestrator | 2026-04-07 00:56:41 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:44.290657 | orchestrator | 2026-04-07 00:56:44 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:44.292059 | orchestrator | 2026-04-07 00:56:44 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:44.292122 | orchestrator | 2026-04-07 00:56:44 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:47.335661 | orchestrator | 2026-04-07 00:56:47 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:47.338361 | orchestrator | 2026-04-07 00:56:47 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:47.338453 | orchestrator | 2026-04-07 00:56:47 | INFO  | Wait 1 second(s) 
until the next check 2026-04-07 00:56:50.384901 | orchestrator | 2026-04-07 00:56:50 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:50.386441 | orchestrator | 2026-04-07 00:56:50 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:50.386489 | orchestrator | 2026-04-07 00:56:50 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:53.437722 | orchestrator | 2026-04-07 00:56:53 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:53.439588 | orchestrator | 2026-04-07 00:56:53 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:53.439681 | orchestrator | 2026-04-07 00:56:53 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:56.489204 | orchestrator | 2026-04-07 00:56:56 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:56.492520 | orchestrator | 2026-04-07 00:56:56 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:56.492684 | orchestrator | 2026-04-07 00:56:56 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:56:59.524632 | orchestrator | 2026-04-07 00:56:59 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:56:59.525799 | orchestrator | 2026-04-07 00:56:59 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:56:59.525858 | orchestrator | 2026-04-07 00:56:59 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:02.582415 | orchestrator | 2026-04-07 00:57:02 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:02.583653 | orchestrator | 2026-04-07 00:57:02 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:02.583694 | orchestrator | 2026-04-07 00:57:02 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:05.636092 | orchestrator | 2026-04-07 
00:57:05 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:05.638399 | orchestrator | 2026-04-07 00:57:05 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:05.638477 | orchestrator | 2026-04-07 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:08.684231 | orchestrator | 2026-04-07 00:57:08 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:08.686248 | orchestrator | 2026-04-07 00:57:08 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:08.686311 | orchestrator | 2026-04-07 00:57:08 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:11.736022 | orchestrator | 2026-04-07 00:57:11 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:11.737810 | orchestrator | 2026-04-07 00:57:11 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:11.737855 | orchestrator | 2026-04-07 00:57:11 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:14.790517 | orchestrator | 2026-04-07 00:57:14 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:14.791426 | orchestrator | 2026-04-07 00:57:14 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:14.791489 | orchestrator | 2026-04-07 00:57:14 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:17.846592 | orchestrator | 2026-04-07 00:57:17 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:17.849445 | orchestrator | 2026-04-07 00:57:17 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:17.849527 | orchestrator | 2026-04-07 00:57:17 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:20.896213 | orchestrator | 2026-04-07 00:57:20 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state 
STARTED 2026-04-07 00:57:20.896268 | orchestrator | 2026-04-07 00:57:20 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:20.896277 | orchestrator | 2026-04-07 00:57:20 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:23.938583 | orchestrator | 2026-04-07 00:57:23 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:23.938953 | orchestrator | 2026-04-07 00:57:23 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:23.939163 | orchestrator | 2026-04-07 00:57:23 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:26.986885 | orchestrator | 2026-04-07 00:57:26 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:26.989306 | orchestrator | 2026-04-07 00:57:26 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:26.989351 | orchestrator | 2026-04-07 00:57:26 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:30.035663 | orchestrator | 2026-04-07 00:57:30 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:30.036203 | orchestrator | 2026-04-07 00:57:30 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:30.038814 | orchestrator | 2026-04-07 00:57:30 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:33.088268 | orchestrator | 2026-04-07 00:57:33 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:33.092848 | orchestrator | 2026-04-07 00:57:33 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:33.092935 | orchestrator | 2026-04-07 00:57:33 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:36.139482 | orchestrator | 2026-04-07 00:57:36 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:36.141475 | orchestrator | 2026-04-07 00:57:36 | INFO  
| Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:36.141588 | orchestrator | 2026-04-07 00:57:36 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:39.185468 | orchestrator | 2026-04-07 00:57:39 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:39.186990 | orchestrator | 2026-04-07 00:57:39 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:39.187256 | orchestrator | 2026-04-07 00:57:39 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:42.232884 | orchestrator | 2026-04-07 00:57:42 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:42.237044 | orchestrator | 2026-04-07 00:57:42 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:42.239879 | orchestrator | 2026-04-07 00:57:42 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:45.279277 | orchestrator | 2026-04-07 00:57:45 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:45.282704 | orchestrator | 2026-04-07 00:57:45 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:45.282793 | orchestrator | 2026-04-07 00:57:45 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:48.330399 | orchestrator | 2026-04-07 00:57:48 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:48.335469 | orchestrator | 2026-04-07 00:57:48 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:48.335544 | orchestrator | 2026-04-07 00:57:48 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:51.368111 | orchestrator | 2026-04-07 00:57:51 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:51.368997 | orchestrator | 2026-04-07 00:57:51 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 
00:57:51.369045 | orchestrator | 2026-04-07 00:57:51 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:54.408078 | orchestrator | 2026-04-07 00:57:54 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:54.410377 | orchestrator | 2026-04-07 00:57:54 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:54.412151 | orchestrator | 2026-04-07 00:57:54 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:57:57.446153 | orchestrator | 2026-04-07 00:57:57 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:57:57.446249 | orchestrator | 2026-04-07 00:57:57 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:57:57.446262 | orchestrator | 2026-04-07 00:57:57 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:00.492344 | orchestrator | 2026-04-07 00:58:00 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:58:00.495364 | orchestrator | 2026-04-07 00:58:00 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:00.496110 | orchestrator | 2026-04-07 00:58:00 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:03.588587 | orchestrator | 2026-04-07 00:58:03 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:58:03.590608 | orchestrator | 2026-04-07 00:58:03 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:03.591389 | orchestrator | 2026-04-07 00:58:03 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:06.630998 | orchestrator | 2026-04-07 00:58:06 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:58:06.632473 | orchestrator | 2026-04-07 00:58:06 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:06.632587 | orchestrator | 2026-04-07 00:58:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-07 00:58:09.671937 | orchestrator | 2026-04-07 00:58:09 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:58:09.673798 | orchestrator | 2026-04-07 00:58:09 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:09.673853 | orchestrator | 2026-04-07 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:12.716499 | orchestrator | 2026-04-07 00:58:12 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state STARTED 2026-04-07 00:58:12.717444 | orchestrator | 2026-04-07 00:58:12 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:12.717474 | orchestrator | 2026-04-07 00:58:12 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:15.758301 | orchestrator | 2026-04-07 00:58:15 | INFO  | Task fb63ecc7-ad64-4e9e-8f2b-197153bf638c is in state SUCCESS 2026-04-07 00:58:15.759376 | orchestrator | 2026-04-07 00:58:15.759408 | orchestrator | 2026-04-07 00:58:15.759415 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 00:58:15.759421 | orchestrator | 2026-04-07 00:58:15.759426 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 00:58:15.759431 | orchestrator | Tuesday 07 April 2026 00:51:56 +0000 (0:00:00.381) 0:00:00.381 ********* 2026-04-07 00:58:15.759437 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.759443 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.759447 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.759452 | orchestrator | 2026-04-07 00:58:15.759457 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 00:58:15.759462 | orchestrator | Tuesday 07 April 2026 00:51:56 +0000 (0:00:00.409) 0:00:00.790 ********* 2026-04-07 00:58:15.759468 | orchestrator | ok: [testbed-node-0] => 
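The repeated status lines above are the standard poll-until-terminal pattern: a client checks each task's state on a fixed interval and stops once every task reaches a terminal state such as SUCCESS. A minimal sketch of that loop, assuming a hypothetical `get_task_state` callable (not the actual osism client API):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until every one reaches a terminal state.

    `get_task_state` is a hypothetical callable mapping a task ID to its
    current state string (e.g. "STARTED", "SUCCESS"), standing in for
    whatever API the real client queries.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
        pending -= set(results)  # drop finished tasks from the poll set
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

With two tasks, the loop prints one line per task per round, exactly the shape of the log above, and returns only once both have left the STARTED state.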
(item=enable_loadbalancer_True)
2026-04-07 00:58:15.759476 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-07 00:58:15.759483 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-07 00:58:15.759492 | orchestrator |
2026-04-07 00:58:15.759501 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-07 00:58:15.759507 | orchestrator |
2026-04-07 00:58:15.759513 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-07 00:58:15.759520 | orchestrator | Tuesday 07 April 2026 00:51:56 +0000 (0:00:00.417) 0:00:01.207 *********
2026-04-07 00:58:15.759526 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:58:15.759533 | orchestrator |
2026-04-07 00:58:15.759539 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-07 00:58:15.759544 | orchestrator | Tuesday 07 April 2026 00:51:57 +0000 (0:00:01.077) 0:00:02.285 *********
2026-04-07 00:58:15.759551 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:58:15.759557 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:58:15.759564 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:58:15.759571 | orchestrator |
2026-04-07 00:58:15.759577 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-07 00:58:15.759584 | orchestrator | Tuesday 07 April 2026 00:51:59 +0000 (0:00:01.203) 0:00:03.488 *********
2026-04-07 00:58:15.759590 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:58:15.759594 | orchestrator |
2026-04-07 00:58:15.759599 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-07 00:58:15.759603 | orchestrator | Tuesday 07 April 2026 00:52:00 +0000 (0:00:01.050) 0:00:04.539 *********
2026-04-07 00:58:15.759607 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:58:15.759611 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:58:15.759633 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:58:15.759637 | orchestrator |
2026-04-07 00:58:15.759640 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-07 00:58:15.759644 | orchestrator | Tuesday 07 April 2026 00:52:01 +0000 (0:00:00.813) 0:00:05.352 *********
2026-04-07 00:58:15.759648 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-07 00:58:15.759653 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-07 00:58:15.759657 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-07 00:58:15.759660 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-07 00:58:15.759664 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-07 00:58:15.759669 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-07 00:58:15.759673 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-07 00:58:15.759678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-07 00:58:15.759681 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-07 00:58:15.759685 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-07 00:58:15.759689 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-07 00:58:15.759693 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-07 00:58:15.759696 | orchestrator |
2026-04-07 00:58:15.759709 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-07 00:58:15.759713 | orchestrator | Tuesday 07 April 2026 00:52:06 +0000 (0:00:05.039) 0:00:10.392 *********
2026-04-07 00:58:15.759717 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-07 00:58:15.759721 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-07 00:58:15.759725 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-07 00:58:15.759740 | orchestrator |
2026-04-07 00:58:15.759744 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-07 00:58:15.759748 | orchestrator | Tuesday 07 April 2026 00:52:06 +0000 (0:00:00.863) 0:00:11.255 *********
2026-04-07 00:58:15.759752 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-07 00:58:15.759756 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-07 00:58:15.759759 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-07 00:58:15.759763 | orchestrator |
2026-04-07 00:58:15.759767 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-07 00:58:15.759771 | orchestrator | Tuesday 07 April 2026 00:52:08 +0000 (0:00:01.428) 0:00:12.683 *********
2026-04-07 00:58:15.759774 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-07 00:58:15.759778 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.759790 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-07 00:58:15.759794 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.759798 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-07 00:58:15.759802 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.759805 | orchestrator |
2026-04-07 00:58:15.759809 | orchestrator |
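Each item in the sysctl task above is a name/value pair; the sentinel value 'KOLLA_UNSET' means the kernel default is left in place, which is why those items report "ok" rather than "changed". A small illustrative sketch (an assumption about the semantics shown in the log, not Kolla's actual implementation) of rendering such items into /etc/sysctl.d-style drop-in content:

```python
def render_sysctl_conf(settings):
    """Render /etc/sysctl.d-style content from name/value items.

    Items whose value is the sentinel "KOLLA_UNSET" are skipped,
    mirroring the "ok" (unchanged) results in the log above.
    """
    lines = []
    for item in settings:
        if item["value"] == "KOLLA_UNSET":
            continue  # leave the kernel default untouched
        lines.append(f"{item['name']} = {item['value']}")
    return "\n".join(lines) + "\n"

# The exact items applied per node in the task above.
settings = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]
```

The module-load task that follows uses the analogous persistence mechanism for kernel modules: load now via modprobe, persist via a one-module-per-line file under /etc/modules-load.d.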
TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-07 00:58:15.759813 | orchestrator | Tuesday 07 April 2026 00:52:08 +0000 (0:00:00.573) 0:00:13.257 *********
2026-04-07 00:58:15.759819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-07 00:58:15.759831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-07 00:58:15.759835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-07 00:58:15.759839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 00:58:15.759847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 00:58:15.759851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 00:58:15.759859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 00:58:15.759867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 00:58:15.759871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 00:58:15.759875 | orchestrator |
2026-04-07 00:58:15.759879 | orchestrator | TASK [loadbalancer :
Ensuring haproxy service config subdir exists] ************
2026-04-07 00:58:15.759883 | orchestrator | Tuesday 07 April 2026 00:52:10 +0000 (0:00:02.001) 0:00:15.259 *********
2026-04-07 00:58:15.759887 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.759891 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.759894 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.759898 | orchestrator |
2026-04-07 00:58:15.759902 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-07 00:58:15.759906 | orchestrator | Tuesday 07 April 2026 00:52:11 +0000 (0:00:01.012) 0:00:16.272 *********
2026-04-07 00:58:15.759910 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-07 00:58:15.759913 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-07 00:58:15.759917 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-07 00:58:15.759921 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-07 00:58:15.759924 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-07 00:58:15.759928 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-07 00:58:15.759932 | orchestrator |
2026-04-07 00:58:15.759936 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-07 00:58:15.759939 | orchestrator | Tuesday 07 April 2026 00:52:15 +0000 (0:00:03.130) 0:00:19.403 *********
2026-04-07 00:58:15.759943 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.759947 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.759951 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.759954 | orchestrator |
2026-04-07 00:58:15.759958 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-07 00:58:15.759962 | orchestrator | Tuesday 07 April 2026 00:52:16 +0000 (0:00:01.652) 0:00:21.055 *********
2026-04-07 00:58:15.759966 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:58:15.759974 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:58:15.759977 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:58:15.759984 | orchestrator |
2026-04-07 00:58:15.759988 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-07 00:58:15.759991 | orchestrator | Tuesday 07 April 2026 00:52:18 +0000 (0:00:01.677) 0:00:22.733 *********
2026-04-07 00:58:15.759998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-07 00:58:15.760009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 00:58:15.760014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 00:58:15.760018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-07 00:58:15.760022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-07 00:58:15.760026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False,
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.760030 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.760037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.760041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 00:58:15.760048 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.760057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.760061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.760065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.760069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 00:58:15.760073 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.760077 | orchestrator | 2026-04-07 00:58:15.760081 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-07 00:58:15.760085 | orchestrator | Tuesday 07 April 2026 00:52:19 +0000 (0:00:00.938) 0:00:23.671 ********* 2026-04-07 00:58:15.760089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.760122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 00:58:15.760126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
 2026-04-07 00:58:15.760139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 00:58:15.760146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.760172 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9', '__omit_place_holder__c76b25a9c413606cf88498154aba64bf551fdaa9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 00:58:15.760176 | orchestrator | 2026-04-07 00:58:15.760179 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-07 00:58:15.760183 | orchestrator | Tuesday 07 April 2026 00:52:25 +0000 (0:00:05.995) 0:00:29.667 ********* 2026-04-07 00:58:15.760187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 00:58:15.760228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 00:58:15.760236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 00:58:15.760240 | orchestrator | 2026-04-07 00:58:15.760245 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-07 00:58:15.760249 | orchestrator | Tuesday 07 April 2026 00:52:28 +0000 (0:00:03.402) 0:00:33.070 ********* 2026-04-07 00:58:15.760253 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-07 00:58:15.760260 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-07 00:58:15.760264 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-07 00:58:15.760268 | orchestrator | 2026-04-07 00:58:15.760272 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-07 00:58:15.760275 | orchestrator | Tuesday 07 April 2026 00:52:30 +0000 (0:00:01.742) 0:00:34.812 ********* 2026-04-07 00:58:15.760279 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-07 00:58:15.760283 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-07 00:58:15.760287 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-07 00:58:15.760291 | orchestrator | 2026-04-07 00:58:15.760532 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-07 00:58:15.760541 | orchestrator | Tuesday 07 April 2026 00:52:35 +0000 (0:00:04.739) 0:00:39.551 ********* 2026-04-07 00:58:15.760549 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.760552 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.760556 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.760560 | orchestrator | 2026-04-07 00:58:15.760564 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-07 00:58:15.760568 | orchestrator | Tuesday 07 April 2026 00:52:36 +0000 (0:00:00.962) 0:00:40.513 ********* 2026-04-07 00:58:15.760572 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-07 00:58:15.760576 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-07 00:58:15.760580 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-07 00:58:15.760584 | orchestrator | 2026-04-07 00:58:15.760588 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-07 00:58:15.760592 | orchestrator | Tuesday 07 April 2026 00:52:38 +0000 (0:00:02.406) 0:00:42.920 ********* 2026-04-07 00:58:15.760595 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-07 00:58:15.760599 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-07 00:58:15.760603 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-07 00:58:15.760607 | orchestrator | 2026-04-07 00:58:15.760611 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-07 00:58:15.760619 | orchestrator | Tuesday 07 April 2026 00:52:40 +0000 (0:00:01.798) 0:00:44.719 ********* 2026-04-07 00:58:15.760623 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-07 00:58:15.760627 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-07 00:58:15.760631 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-07 00:58:15.760635 | orchestrator | 2026-04-07 00:58:15.760639 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-07 00:58:15.760643 | orchestrator | Tuesday 07 April 2026 00:52:42 +0000 (0:00:01.752) 0:00:46.471 ********* 2026-04-07 00:58:15.760649 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-07 00:58:15.760655 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-07 00:58:15.760661 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-07 00:58:15.760666 | orchestrator | 2026-04-07 00:58:15.760672 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-07 00:58:15.760678 | orchestrator | Tuesday 07 April 2026 00:52:44 +0000 (0:00:01.905) 0:00:48.376 ********* 2026-04-07 00:58:15.760684 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.760689 | orchestrator | 2026-04-07 00:58:15.760695 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-07 00:58:15.760701 | orchestrator | Tuesday 07 April 2026 00:52:44 +0000 (0:00:00.640) 0:00:49.017 ********* 2026-04-07 
00:58:15.760707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 
00:58:15.760735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.760753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-04-07 00:58:15.760760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 00:58:15.760771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 00:58:15.760777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 00:58:15.760783 | orchestrator | 2026-04-07 00:58:15.760789 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-07 00:58:15.760795 | orchestrator | Tuesday 07 April 2026 00:52:48 +0000 (0:00:03.570) 0:00:52.588 ********* 2026-04-07 00:58:15.760806 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.760813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.760823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.760837 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.760842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.760852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.760865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.760871 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.760877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.760888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.760899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.760905 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.760910 | orchestrator | 2026-04-07 00:58:15.760916 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-07 
00:58:15.760922 | orchestrator | Tuesday 07 April 2026 00:52:48 +0000 (0:00:00.630) 0:00:53.218 ********* 2026-04-07 00:58:15.760929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.760935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.760947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 
00:58:15.760953 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.760962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.760973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.760986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.760992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.760999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761005 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.761012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761018 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.761024 | 
orchestrator | 2026-04-07 00:58:15.761030 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-07 00:58:15.761037 | orchestrator | Tuesday 07 April 2026 00:52:49 +0000 (0:00:00.936) 0:00:54.155 ********* 2026-04-07 00:58:15.761046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761075 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.761080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761091 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.761095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761119 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.761123 | orchestrator | 2026-04-07 00:58:15.761127 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-07 00:58:15.761131 | orchestrator | Tuesday 07 April 2026 00:52:50 +0000 (0:00:00.649) 0:00:54.805 ********* 2026-04-07 00:58:15.761135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761147 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.761200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761222 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.761230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761244 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.761248 | orchestrator | 2026-04-07 00:58:15.761253 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-07 00:58:15.761258 | orchestrator | Tuesday 07 April 2026 00:52:51 +0000 (0:00:00.700) 0:00:55.506 ********* 2026-04-07 00:58:15.761262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761275 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761280 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.761288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761315 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761319 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.761324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761341 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.761345 | orchestrator | 2026-04-07 00:58:15.761352 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-07 00:58:15.761357 | orchestrator | Tuesday 07 April 2026 00:52:52 +0000 (0:00:01.013) 0:00:56.519 ********* 2026-04-07 00:58:15.761361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761379 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.761383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761400 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.761407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761436 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.761441 | orchestrator | 2026-04-07 00:58:15.761445 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-07 00:58:15.761450 | orchestrator | Tuesday 07 April 2026 00:52:52 +0000 (0:00:00.806) 0:00:57.325 ********* 2026-04-07 00:58:15.761455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761472 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.761479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761497 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.761501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761516 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.761520 | orchestrator | 2026-04-07 00:58:15.761524 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-07 00:58:15.761528 | orchestrator | Tuesday 07 April 2026 00:52:53 +0000 (0:00:00.671) 0:00:57.997 ********* 2026-04-07 00:58:15.761532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761547 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.761554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761576 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.761580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 00:58:15.761584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 00:58:15.761590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 00:58:15.761594 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.761598 | orchestrator | 2026-04-07 00:58:15.761602 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-07 00:58:15.761605 | orchestrator | Tuesday 07 April 2026 00:52:54 +0000 (0:00:01.187) 0:00:59.185 ********* 2026-04-07 00:58:15.761610 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-07 00:58:15.761614 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-07 00:58:15.761621 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-07 00:58:15.761628 | orchestrator | 2026-04-07 00:58:15.761634 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-07 00:58:15.761640 | orchestrator | Tuesday 07 April 2026 
00:52:56 +0000 (0:00:02.061) 0:01:01.246 ********* 2026-04-07 00:58:15.761646 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-07 00:58:15.761651 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-07 00:58:15.761657 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-07 00:58:15.761663 | orchestrator | 2026-04-07 00:58:15.761669 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-07 00:58:15.761675 | orchestrator | Tuesday 07 April 2026 00:52:58 +0000 (0:00:02.027) 0:01:03.274 ********* 2026-04-07 00:58:15.761681 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 00:58:15.761694 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 00:58:15.761700 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 00:58:15.761707 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.761713 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 00:58:15.761716 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 00:58:15.761720 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.761724 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 00:58:15.761728 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.761731 | orchestrator | 2026-04-07 00:58:15.761735 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 
2026-04-07 00:58:15.761739 | orchestrator | Tuesday 07 April 2026 00:53:00 +0000 (0:00:01.704) 0:01:04.978 ********* 2026-04-07 00:58:15.761743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.761748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.761756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 00:58:15.761771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.761775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 00:58:15.761784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-04-07 00:58:15.761788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 00:58:15.761792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 00:58:15.761796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 00:58:15.761800 | orchestrator | 2026-04-07 00:58:15.761804 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-07 00:58:15.761808 | orchestrator | Tuesday 07 April 2026 00:53:03 +0000 
(0:00:03.244) 0:01:08.223 ********* 2026-04-07 00:58:15.761812 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.761815 | orchestrator | 2026-04-07 00:58:15.761819 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-07 00:58:15.761825 | orchestrator | Tuesday 07 April 2026 00:53:04 +0000 (0:00:00.722) 0:01:08.945 ********* 2026-04-07 00:58:15.761830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 00:58:15.761841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 00:58:15.761845 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 
'listen_port': '8042'}}}}) 2026-04-07 00:58:15.761857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 00:58:15.761864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 00:58:15.761883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 00:58:15.761887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761891 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761895 | orchestrator | 2026-04-07 00:58:15.761899 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-07 00:58:15.761902 | orchestrator | Tuesday 07 April 2026 00:53:09 +0000 (0:00:04.859) 0:01:13.804 ********* 2026-04-07 00:58:15.761909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 00:58:15.761920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 00:58:15.761924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761932 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.761936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 00:58:15.761940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 00:58:15.761947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761959 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.761965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 00:58:15.761969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 00:58:15.761973 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.761981 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.761985 | orchestrator | 2026-04-07 00:58:15.761989 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-07 00:58:15.761993 | orchestrator | Tuesday 07 April 2026 00:53:10 +0000 (0:00:00.617) 0:01:14.422 ********* 2026-04-07 00:58:15.761997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-07 00:58:15.762002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': 
'8042'}})  2026-04-07 00:58:15.762053 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-07 00:58:15.762064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-07 00:58:15.762068 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-07 00:58:15.762076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-07 00:58:15.762079 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762083 | orchestrator | 2026-04-07 00:58:15.762090 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-07 00:58:15.762094 | orchestrator | Tuesday 07 April 2026 00:53:11 +0000 (0:00:01.024) 0:01:15.447 ********* 2026-04-07 00:58:15.762098 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.762102 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.762105 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.762109 | orchestrator | 2026-04-07 00:58:15.762113 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-07 00:58:15.762117 | orchestrator | Tuesday 07 April 2026 00:53:12 +0000 (0:00:01.438) 0:01:16.886 ********* 2026-04-07 00:58:15.762120 | orchestrator | changed: [testbed-node-0] 
2026-04-07 00:58:15.762124 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.762128 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.762132 | orchestrator | 2026-04-07 00:58:15.762135 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-07 00:58:15.762139 | orchestrator | Tuesday 07 April 2026 00:53:14 +0000 (0:00:02.302) 0:01:19.188 ********* 2026-04-07 00:58:15.762143 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.762146 | orchestrator | 2026-04-07 00:58:15.762167 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-07 00:58:15.762174 | orchestrator | Tuesday 07 April 2026 00:53:15 +0000 (0:00:00.559) 0:01:19.748 ********* 2026-04-07 00:58:15.762180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.762186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.762208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.762220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762236 | orchestrator | 2026-04-07 00:58:15.762240 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-07 00:58:15.762244 | orchestrator | Tuesday 07 April 2026 00:53:18 +0000 (0:00:03.416) 0:01:23.164 ********* 2026-04-07 
00:58:15.762250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.762259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762267 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.762280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-04-07 00:58:15.762285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762288 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.762299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762310 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762314 | orchestrator | 2026-04-07 00:58:15.762317 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-07 00:58:15.762321 | orchestrator | Tuesday 07 April 2026 00:53:19 +0000 (0:00:00.689) 0:01:23.854 ********* 2026-04-07 00:58:15.762325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762334 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762338 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762346 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762360 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762363 | orchestrator | 2026-04-07 00:58:15.762367 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-07 00:58:15.762371 | orchestrator | Tuesday 07 April 2026 00:53:20 +0000 (0:00:00.702) 0:01:24.556 ********* 2026-04-07 00:58:15.762375 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.762378 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.762382 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.762386 | orchestrator | 2026-04-07 00:58:15.762390 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-07 00:58:15.762393 | orchestrator | Tuesday 07 April 2026 00:53:21 +0000 (0:00:01.289) 0:01:25.845 ********* 2026-04-07 00:58:15.762397 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.762401 | orchestrator | changed: 
[testbed-node-1] 2026-04-07 00:58:15.762405 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.762408 | orchestrator | 2026-04-07 00:58:15.762420 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-07 00:58:15.762424 | orchestrator | Tuesday 07 April 2026 00:53:23 +0000 (0:00:01.894) 0:01:27.740 ********* 2026-04-07 00:58:15.762428 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762432 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762437 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762448 | orchestrator | 2026-04-07 00:58:15.762454 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-07 00:58:15.762459 | orchestrator | Tuesday 07 April 2026 00:53:23 +0000 (0:00:00.257) 0:01:27.997 ********* 2026-04-07 00:58:15.762465 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.762471 | orchestrator | 2026-04-07 00:58:15.762476 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-07 00:58:15.762482 | orchestrator | Tuesday 07 April 2026 00:53:24 +0000 (0:00:00.719) 0:01:28.716 ********* 2026-04-07 00:58:15.762495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-07 00:58:15.762502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-07 00:58:15.762512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-07 00:58:15.762519 | orchestrator | 2026-04-07 00:58:15.762525 | orchestrator | TASK [haproxy-config : Add 
configuration for ceph-rgw when using single external frontend] *** 2026-04-07 00:58:15.762531 | orchestrator | Tuesday 07 April 2026 00:53:27 +0000 (0:00:02.665) 0:01:31.382 ********* 2026-04-07 00:58:15.762542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-07 00:58:15.762549 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  
2026-04-07 00:58:15.762568 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-07 00:58:15.762576 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762580 | orchestrator | 2026-04-07 00:58:15.762583 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-07 00:58:15.762587 | orchestrator | Tuesday 07 April 2026 00:53:28 +0000 (0:00:01.890) 0:01:33.273 ********* 2026-04-07 00:58:15.762592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 00:58:15.762598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 00:58:15.762603 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 00:58:15.762613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 00:58:15.762617 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 00:58:15.762631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 
2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 00:58:15.762634 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762638 | orchestrator | 2026-04-07 00:58:15.762642 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-07 00:58:15.762646 | orchestrator | Tuesday 07 April 2026 00:53:30 +0000 (0:00:02.071) 0:01:35.344 ********* 2026-04-07 00:58:15.762650 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762653 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762657 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762661 | orchestrator | 2026-04-07 00:58:15.762664 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-07 00:58:15.762668 | orchestrator | Tuesday 07 April 2026 00:53:31 +0000 (0:00:00.424) 0:01:35.769 ********* 2026-04-07 00:58:15.762672 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762676 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762679 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762683 | orchestrator | 2026-04-07 00:58:15.762687 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-07 00:58:15.762691 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:01.051) 0:01:36.821 ********* 2026-04-07 00:58:15.762694 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.762698 | orchestrator | 2026-04-07 00:58:15.762702 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-07 00:58:15.762706 | orchestrator | Tuesday 07 April 2026 00:53:33 +0000 (0:00:01.078) 0:01:37.899 ********* 2026-04-07 00:58:15.762710 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.762714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.762746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.762750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762785 | orchestrator | 2026-04-07 00:58:15.762789 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-07 00:58:15.762793 | orchestrator | Tuesday 07 April 2026 00:53:38 +0000 (0:00:04.718) 0:01:42.618 ********* 2026-04-07 00:58:15.762809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.762821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762836 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.762844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762862 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.762873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762881 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.762888 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762892 | orchestrator | 2026-04-07 00:58:15.762895 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-07 00:58:15.762899 | orchestrator | Tuesday 07 April 2026 00:53:39 +0000 (0:00:01.357) 0:01:43.976 ********* 2026-04-07 00:58:15.762906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762918 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.762922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762930 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.762936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 00:58:15.762944 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.762948 | orchestrator | 2026-04-07 00:58:15.762952 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-07 00:58:15.762956 | orchestrator | Tuesday 07 April 2026 00:53:40 +0000 (0:00:01.350) 0:01:45.326 ********* 2026-04-07 00:58:15.762959 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.762963 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.762967 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.762971 | orchestrator | 2026-04-07 00:58:15.762975 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-07 00:58:15.762979 | orchestrator | Tuesday 07 April 2026 00:53:42 +0000 (0:00:01.984) 0:01:47.311 ********* 2026-04-07 00:58:15.762982 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.762986 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.762990 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.762994 | orchestrator | 2026-04-07 00:58:15.762997 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-07 00:58:15.763001 | orchestrator | Tuesday 
07 April 2026 00:53:44 +0000 (0:00:02.025) 0:01:49.336 ********* 2026-04-07 00:58:15.763005 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.763008 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.763012 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.763016 | orchestrator | 2026-04-07 00:58:15.763020 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-07 00:58:15.763024 | orchestrator | Tuesday 07 April 2026 00:53:45 +0000 (0:00:00.290) 0:01:49.626 ********* 2026-04-07 00:58:15.763028 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.763031 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.763035 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.763039 | orchestrator | 2026-04-07 00:58:15.763049 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-07 00:58:15.763063 | orchestrator | Tuesday 07 April 2026 00:53:45 +0000 (0:00:00.304) 0:01:49.931 ********* 2026-04-07 00:58:15.763071 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.763077 | orchestrator | 2026-04-07 00:58:15.763085 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-07 00:58:15.763090 | orchestrator | Tuesday 07 April 2026 00:53:46 +0000 (0:00:00.863) 0:01:50.794 ********* 2026-04-07 00:58:15.763096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 00:58:15.763108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 00:58:15.763114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 00:58:15.763302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 00:58:15.763313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 00:58:15.763347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 00:58:15.763352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763375 | orchestrator | 2026-04-07 00:58:15.763379 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-07 00:58:15.763385 | orchestrator | Tuesday 07 April 2026 00:53:50 +0000 (0:00:04.155) 0:01:54.950 ********* 2026-04-07 
00:58:15.763389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 00:58:15.763396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 00:58:15.763404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763428 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.763436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 00:58:15.763443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 00:58:15.763447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 00:58:15.763461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 00:58:15.763476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763488 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.763492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.763534 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.763538 | orchestrator | 2026-04-07 00:58:15.763542 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-07 00:58:15.763546 | orchestrator | Tuesday 07 April 2026 00:53:51 +0000 (0:00:00.781) 0:01:55.732 ********* 2026-04-07 00:58:15.763550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-07 00:58:15.763555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-07 00:58:15.763559 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.763563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-07 00:58:15.763567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-07 00:58:15.763571 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.763575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  
2026-04-07 00:58:15.763578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-07 00:58:15.763582 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.763586 | orchestrator | 2026-04-07 00:58:15.763590 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-07 00:58:15.763594 | orchestrator | Tuesday 07 April 2026 00:53:52 +0000 (0:00:01.212) 0:01:56.944 ********* 2026-04-07 00:58:15.763597 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.763601 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.763605 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.763609 | orchestrator | 2026-04-07 00:58:15.763612 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-07 00:58:15.763616 | orchestrator | Tuesday 07 April 2026 00:53:54 +0000 (0:00:01.408) 0:01:58.353 ********* 2026-04-07 00:58:15.763620 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.763624 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.763628 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.763637 | orchestrator | 2026-04-07 00:58:15.763641 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-07 00:58:15.763645 | orchestrator | Tuesday 07 April 2026 00:53:56 +0000 (0:00:02.191) 0:02:00.544 ********* 2026-04-07 00:58:15.763648 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.763652 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.763656 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.763660 | orchestrator | 2026-04-07 00:58:15.763664 | orchestrator | TASK [include_role : glance] *************************************************** 
2026-04-07 00:58:15.763671 | orchestrator | Tuesday 07 April 2026 00:53:56 +0000 (0:00:00.372) 0:02:00.917 ********* 2026-04-07 00:58:15.763677 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.763682 | orchestrator | 2026-04-07 00:58:15.763688 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-07 00:58:15.763700 | orchestrator | Tuesday 07 April 2026 00:53:58 +0000 (0:00:01.582) 0:02:02.499 ********* 2026-04-07 00:58:15.763716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 00:58:15.763725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 00:58:15.763740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 00:58:15.763749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': 
{'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 00:58:15.763756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 00:58:15.763781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 00:58:15.763788 | orchestrator | 2026-04-07 00:58:15.763794 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-07 00:58:15.763800 | orchestrator | Tuesday 07 April 2026 00:54:04 +0000 (0:00:06.722) 0:02:09.221 ********* 2026-04-07 00:58:15.763809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-07 00:58:15.763821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 00:58:15.763826 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.763830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-07 00:58:15.764044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 00:58:15.764054 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-07 00:58:15.764068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 00:58:15.764077 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764081 | orchestrator | 2026-04-07 00:58:15.764085 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-07 00:58:15.764089 | orchestrator | Tuesday 07 April 2026 00:54:07 +0000 (0:00:02.857) 0:02:12.079 ********* 2026-04-07 00:58:15.764093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 00:58:15.764097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 00:58:15.764102 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 00:58:15.764110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 00:58:15.764117 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.764121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 00:58:15.764127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 00:58:15.764131 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764135 | orchestrator | 2026-04-07 00:58:15.764139 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-07 00:58:15.764143 | orchestrator | Tuesday 07 April 2026 00:54:11 +0000 (0:00:03.357) 0:02:15.436 ********* 2026-04-07 00:58:15.764146 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.764216 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.764221 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.764225 | orchestrator | 2026-04-07 00:58:15.764229 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-07 00:58:15.764233 | orchestrator | Tuesday 07 April 2026 00:54:12 +0000 (0:00:01.353) 0:02:16.790 ********* 2026-04-07 00:58:15.764236 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.764240 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.764247 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.764251 | orchestrator | 2026-04-07 00:58:15.764254 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-07 00:58:15.764258 | 
orchestrator | Tuesday 07 April 2026 00:54:14 +0000 (0:00:02.005) 0:02:18.796 ********* 2026-04-07 00:58:15.764262 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764266 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.764269 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764273 | orchestrator | 2026-04-07 00:58:15.764277 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-07 00:58:15.764281 | orchestrator | Tuesday 07 April 2026 00:54:14 +0000 (0:00:00.298) 0:02:19.094 ********* 2026-04-07 00:58:15.764285 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.764289 | orchestrator | 2026-04-07 00:58:15.764292 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-07 00:58:15.764296 | orchestrator | Tuesday 07 April 2026 00:54:15 +0000 (0:00:01.064) 0:02:20.159 ********* 2026-04-07 00:58:15.764300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 00:58:15.764309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 00:58:15.764313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 00:58:15.764317 | orchestrator | 2026-04-07 00:58:15.764321 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-07 00:58:15.764325 | orchestrator | Tuesday 07 April 2026 00:54:18 +0000 (0:00:03.058) 0:02:23.217 ********* 2026-04-07 00:58:15.764331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-07 00:58:15.764337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-07 00:58:15.764342 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764345 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.764349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-07 00:58:15.764353 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764357 | orchestrator | 2026-04-07 00:58:15.764361 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-07 00:58:15.764368 | orchestrator | Tuesday 07 April 2026 00:54:19 +0000 (0:00:00.388) 
0:02:23.606 ********* 2026-04-07 00:58:15.764372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-07 00:58:15.764377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-07 00:58:15.764381 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-07 00:58:15.764389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-07 00:58:15.764392 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.764396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-07 00:58:15.764400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-07 00:58:15.764404 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764408 | orchestrator | 2026-04-07 00:58:15.764411 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-07 00:58:15.764415 | orchestrator | Tuesday 07 April 2026 00:54:20 +0000 (0:00:00.795) 0:02:24.401 ********* 2026-04-07 00:58:15.764419 | orchestrator | changed: 
[testbed-node-0] 2026-04-07 00:58:15.764423 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.764427 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.764430 | orchestrator | 2026-04-07 00:58:15.764434 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-07 00:58:15.764438 | orchestrator | Tuesday 07 April 2026 00:54:21 +0000 (0:00:01.414) 0:02:25.815 ********* 2026-04-07 00:58:15.764442 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.764446 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.764449 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.764453 | orchestrator | 2026-04-07 00:58:15.764457 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-07 00:58:15.764461 | orchestrator | Tuesday 07 April 2026 00:54:23 +0000 (0:00:02.102) 0:02:27.918 ********* 2026-04-07 00:58:15.764465 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764471 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.764475 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764479 | orchestrator | 2026-04-07 00:58:15.764482 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-07 00:58:15.764486 | orchestrator | Tuesday 07 April 2026 00:54:23 +0000 (0:00:00.317) 0:02:28.236 ********* 2026-04-07 00:58:15.764490 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.764494 | orchestrator | 2026-04-07 00:58:15.764497 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-07 00:58:15.764501 | orchestrator | Tuesday 07 April 2026 00:54:24 +0000 (0:00:01.103) 0:02:29.339 ********* 2026-04-07 00:58:15.764510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}}) 2026-04-07 00:58:15.764527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 00:58:15.764536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 00:58:15.764545 | orchestrator | 2026-04-07 00:58:15.764549 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-07 00:58:15.764553 | orchestrator | Tuesday 07 April 2026 00:54:28 +0000 (0:00:03.366) 0:02:32.706 ********* 2026-04-07 00:58:15.764562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 00:58:15.764570 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 00:58:15.764579 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.764586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 00:58:15.764593 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764597 | orchestrator | 2026-04-07 00:58:15.764601 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-07 00:58:15.764605 | orchestrator | Tuesday 07 April 2026 00:54:29 +0000 (0:00:00.661) 0:02:33.368 ********* 2026-04-07 00:58:15.764609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-07 00:58:15.764614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 00:58:15.764620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-07 00:58:15.764624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 00:58:15.764628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-07 00:58:15.764632 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-07 00:58:15.764640 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 00:58:15.764660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-07 00:58:15.764665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 00:58:15.764681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-07 00:58:15.764685 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.764690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-07 00:58:15.764698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}})  2026-04-07 00:58:15.764703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-07 00:58:15.764707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 00:58:15.764711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-07 00:58:15.764716 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764720 | orchestrator | 2026-04-07 00:58:15.764724 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-07 00:58:15.764729 | orchestrator | Tuesday 07 April 2026 00:54:29 +0000 (0:00:00.955) 0:02:34.323 ********* 2026-04-07 00:58:15.764733 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.764738 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.764742 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.764746 | orchestrator | 2026-04-07 00:58:15.764751 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-07 00:58:15.764756 | orchestrator | Tuesday 07 April 2026 00:54:31 +0000 (0:00:01.678) 0:02:36.001 ********* 2026-04-07 00:58:15.764760 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.764764 | orchestrator | changed: [testbed-node-1] 2026-04-07 
00:58:15.764769 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.764773 | orchestrator | 2026-04-07 00:58:15.764783 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-07 00:58:15.764788 | orchestrator | Tuesday 07 April 2026 00:54:33 +0000 (0:00:02.110) 0:02:38.112 ********* 2026-04-07 00:58:15.764792 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764805 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.764809 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764814 | orchestrator | 2026-04-07 00:58:15.764818 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-07 00:58:15.764823 | orchestrator | Tuesday 07 April 2026 00:54:34 +0000 (0:00:00.294) 0:02:38.406 ********* 2026-04-07 00:58:15.764828 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764832 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.764836 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.764841 | orchestrator | 2026-04-07 00:58:15.764845 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-07 00:58:15.764853 | orchestrator | Tuesday 07 April 2026 00:54:34 +0000 (0:00:00.279) 0:02:38.686 ********* 2026-04-07 00:58:15.764858 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.764862 | orchestrator | 2026-04-07 00:58:15.764866 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-07 00:58:15.764871 | orchestrator | Tuesday 07 April 2026 00:54:35 +0000 (0:00:01.119) 0:02:39.806 ********* 2026-04-07 00:58:15.764878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 00:58:15.764887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 00:58:15.764893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 00:58:15.764899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 00:58:15.764904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 00:58:15.764912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 00:58:15.764919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 00:58:15.764925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 00:58:15.764930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 00:58:15.764934 | orchestrator | 2026-04-07 00:58:15.764938 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-07 00:58:15.764941 | orchestrator | Tuesday 07 April 2026 00:54:39 +0000 (0:00:03.566) 0:02:43.372 ********* 2026-04-07 00:58:15.764946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 00:58:15.764953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 00:58:15.764959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 00:58:15.764963 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.764972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 00:58:15.764977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 00:58:15.764981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 00:58:15.764988 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.764992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 00:58:15.764999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 00:58:15.765004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 00:58:15.765008 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.765011 | orchestrator |
2026-04-07 00:58:15.765017 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-04-07 00:58:15.765021 | orchestrator | Tuesday 07 April 2026 00:54:39 +0000 (0:00:00.626) 0:02:43.998 *********
2026-04-07 00:58:15.765027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-07 00:58:15.765032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-07 00:58:15.765036 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.765039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-07 00:58:15.765043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-07 00:58:15.765053 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.765057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-07 00:58:15.765061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-07 00:58:15.765065 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.765068 | orchestrator |
2026-04-07 00:58:15.765072 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-04-07 00:58:15.765079 | orchestrator | Tuesday 07 April 2026 00:54:40 +0000 (0:00:01.061) 0:02:45.060 *********
2026-04-07 00:58:15.765083 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.765087 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.765091 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.765094 | orchestrator |
2026-04-07 00:58:15.765098 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-04-07 00:58:15.765102 | orchestrator | Tuesday 07 April 2026 00:54:42 +0000 (0:00:01.633) 0:02:46.694 *********
2026-04-07 00:58:15.765106 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.765110 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.765114 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.765117 | orchestrator |
2026-04-07 00:58:15.765121 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-04-07 00:58:15.765125 | orchestrator | Tuesday 07 April 2026 00:54:45 +0000 (0:00:02.664) 0:02:49.359 *********
2026-04-07 00:58:15.765129 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.765132 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.765136 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.765140 | orchestrator |
2026-04-07 00:58:15.765144 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-04-07 00:58:15.765148 | orchestrator | Tuesday 07 April 2026 00:54:45 +0000 (0:00:00.410) 0:02:49.770 *********
2026-04-07 00:58:15.765165 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:58:15.765169 | orchestrator |
2026-04-07 00:58:15.765173 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-04-07 00:58:15.765177 | orchestrator | Tuesday 07 April 2026 00:54:46 +0000 (0:00:01.344) 0:02:51.114 *********
2026-04-07 00:58:15.765187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 00:58:15.765195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 00:58:15.765210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 00:58:15.765220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765224 | orchestrator |
2026-04-07 00:58:15.765228 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-04-07 00:58:15.765232 | orchestrator | Tuesday 07 April 2026 00:54:50 +0000 (0:00:03.886) 0:02:55.001 *********
2026-04-07 00:58:15.765238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 00:58:15.765247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765251 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.765255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 00:58:15.765262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765266 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.765272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 00:58:15.765280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765290 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.765294 | orchestrator |
2026-04-07 00:58:15.765298 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-04-07 00:58:15.765302 | orchestrator | Tuesday 07 April 2026 00:54:51 +0000 (0:00:00.675) 0:02:55.676 *********
2026-04-07 00:58:15.765306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-07 00:58:15.765311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-07 00:58:15.765314 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.765318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-07 00:58:15.765322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-07 00:58:15.765326 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.765330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-07 00:58:15.765333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-07 00:58:15.765337 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.765341 | orchestrator |
2026-04-07 00:58:15.765345 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-04-07 00:58:15.765349 | orchestrator | Tuesday 07 April 2026 00:54:52 +0000 (0:00:01.164) 0:02:56.840 *********
2026-04-07 00:58:15.765352 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.765356 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.765360 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.765363 | orchestrator |
2026-04-07 00:58:15.765367 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-04-07 00:58:15.765371 | orchestrator | Tuesday 07 April 2026 00:54:54 +0000 (0:00:01.526) 0:02:58.367 *********
2026-04-07 00:58:15.765375 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.765378 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.765382 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.765386 | orchestrator |
2026-04-07 00:58:15.765390 | orchestrator | TASK [include_role : manila] ***************************************************
2026-04-07 00:58:15.765394 | orchestrator | Tuesday 07 April 2026 00:54:56 +0000 (0:00:02.241) 0:03:00.609 *********
2026-04-07 00:58:15.765400 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:58:15.765407 | orchestrator |
2026-04-07 00:58:15.765411 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-04-07 00:58:15.765415 | orchestrator | Tuesday 07 April 2026 00:54:57 +0000 (0:00:01.107) 0:03:01.716 *********
2026-04-07 00:58:15.765715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-07 00:58:15.765727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-07 00:58:15.765747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-07 00:58:15.765766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765796 | orchestrator |
2026-04-07 00:58:15.765801 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-04-07 00:58:15.765805 | orchestrator | Tuesday 07 April 2026 00:55:01 +0000 (0:00:04.519) 0:03:06.236 *********
2026-04-07 00:58:15.765816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-07 00:58:15.765822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765835 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.765839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-07 00:58:15.765849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765865 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.765869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-07 00:58:15.765873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.765892 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.765896 | orchestrator |
2026-04-07 00:58:15.765900 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-04-07 00:58:15.765904 | orchestrator | Tuesday 07 April 2026 00:55:02 +0000 (0:00:00.712) 0:03:06.948 *********
2026-04-07 00:58:15.765908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-04-07 00:58:15.765912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-04-07 00:58:15.765917 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.765925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-04-07 00:58:15.765931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-04-07 00:58:15.765938 | orchestrator | 
skipping: [testbed-node-1] 2026-04-07 00:58:15.765942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-07 00:58:15.765946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-07 00:58:15.765950 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.765954 | orchestrator | 2026-04-07 00:58:15.765958 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-07 00:58:15.765962 | orchestrator | Tuesday 07 April 2026 00:55:03 +0000 (0:00:00.881) 0:03:07.829 ********* 2026-04-07 00:58:15.765965 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.765969 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.765973 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.765976 | orchestrator | 2026-04-07 00:58:15.765980 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-07 00:58:15.765984 | orchestrator | Tuesday 07 April 2026 00:55:04 +0000 (0:00:01.317) 0:03:09.147 ********* 2026-04-07 00:58:15.765988 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.765991 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.765995 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.765999 | orchestrator | 2026-04-07 00:58:15.766002 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-07 00:58:15.766006 | orchestrator | Tuesday 07 April 2026 00:55:06 +0000 (0:00:02.187) 0:03:11.334 ********* 2026-04-07 00:58:15.766044 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.766049 | orchestrator | 
2026-04-07 00:58:15.766053 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-07 00:58:15.766057 | orchestrator | Tuesday 07 April 2026 00:55:08 +0000 (0:00:01.303) 0:03:12.638 ********* 2026-04-07 00:58:15.766061 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 00:58:15.766065 | orchestrator | 2026-04-07 00:58:15.766068 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-07 00:58:15.766072 | orchestrator | Tuesday 07 April 2026 00:55:11 +0000 (0:00:03.170) 0:03:15.809 ********* 2026-04-07 00:58:15.766079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 00:58:15.766087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 00:58:15.766091 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.766096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 00:58:15.766105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 00:58:15.766109 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.766132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 00:58:15.766140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 00:58:15.766147 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.766168 | orchestrator | 2026-04-07 00:58:15.766172 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-07 00:58:15.766176 | orchestrator | Tuesday 07 April 2026 00:55:14 +0000 (0:00:03.127) 0:03:18.937 ********* 2026-04-07 00:58:15.766181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 00:58:15.766191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 00:58:15.766196 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.766203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 00:58:15.766211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 00:58:15.766215 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.766221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 00:58:15.766229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 00:58:15.766233 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.766236 | orchestrator | 2026-04-07 00:58:15.766240 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-07 00:58:15.766248 | orchestrator | Tuesday 07 April 2026 00:55:17 +0000 (0:00:03.363) 0:03:22.300 ********* 2026-04-07 00:58:15.766252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 00:58:15.766256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 00:58:15.766260 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.766263 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 00:58:15.766268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 00:58:15.766271 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.766279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 00:58:15.766285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 00:58:15.766292 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.766296 | orchestrator | 2026-04-07 00:58:15.766305 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-07 00:58:15.766308 | orchestrator | Tuesday 07 April 2026 00:55:20 +0000 (0:00:02.410) 0:03:24.711 ********* 2026-04-07 00:58:15.766312 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.766316 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.766320 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.766323 | orchestrator | 2026-04-07 00:58:15.766327 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-07 00:58:15.766331 | orchestrator | Tuesday 07 April 2026 00:55:22 +0000 (0:00:02.280) 0:03:26.992 ********* 2026-04-07 00:58:15.766335 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.766339 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.766344 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.766348 | orchestrator | 2026-04-07 00:58:15.766352 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-07 00:58:15.766357 | orchestrator | Tuesday 07 April 2026 00:55:24 +0000 (0:00:01.855) 0:03:28.848 ********* 2026-04-07 00:58:15.766361 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.766365 | orchestrator | skipping: 
[testbed-node-1] 2026-04-07 00:58:15.766370 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.766374 | orchestrator | 2026-04-07 00:58:15.766379 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-07 00:58:15.766383 | orchestrator | Tuesday 07 April 2026 00:55:24 +0000 (0:00:00.322) 0:03:29.171 ********* 2026-04-07 00:58:15.766388 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.766392 | orchestrator | 2026-04-07 00:58:15.766396 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-07 00:58:15.766401 | orchestrator | Tuesday 07 April 2026 00:55:26 +0000 (0:00:01.385) 0:03:30.556 ********* 2026-04-07 00:58:15.766406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-07 00:58:15.766412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 00:58:15.766419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 00:58:15.766427 | orchestrator |
2026-04-07 00:58:15.766432 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-04-07 00:58:15.766436 | orchestrator | Tuesday 07 April 2026 00:55:27 +0000 (0:00:01.587) 0:03:32.144 *********
2026-04-07 00:58:15.766450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 00:58:15.766454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 00:58:15.766459 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.766464 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.766468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 00:58:15.766473 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.766477 | orchestrator |
2026-04-07 00:58:15.766482 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-04-07 00:58:15.766490 | orchestrator | Tuesday 07 April 2026 00:55:28 +0000 (0:00:00.437) 0:03:32.582 *********
2026-04-07 00:58:15.766495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-07 00:58:15.766500 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.766505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-07 00:58:15.766509 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.766516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-07 00:58:15.766524 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.766529 | orchestrator |
2026-04-07 00:58:15.766533 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-04-07 00:58:15.766538 | orchestrator | Tuesday 07 April 2026 00:55:29 +0000 (0:00:01.039) 0:03:33.621 *********
2026-04-07 00:58:15.766542 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.766546 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.766551 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.766555 | orchestrator |
2026-04-07 00:58:15.766559 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-07 00:58:15.766564 | orchestrator | Tuesday 07 April 2026 00:55:29 +0000 (0:00:00.409) 0:03:34.030 *********
2026-04-07 00:58:15.766568 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.766573 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.766577 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.766581 | orchestrator |
2026-04-07 00:58:15.766586 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-07 00:58:15.766590 | orchestrator | Tuesday 07 April 2026 00:55:31 +0000 (0:00:01.374) 0:03:35.405 *********
2026-04-07 00:58:15.766595 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.766599 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.766604 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.766610 | orchestrator |
2026-04-07 00:58:15.766615 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-07 00:58:15.766619 | orchestrator | Tuesday 07 April 2026 00:55:31 +0000 (0:00:00.312) 0:03:35.717 *********
2026-04-07 00:58:15.766624 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:58:15.766628 | orchestrator |
2026-04-07 00:58:15.766632 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-04-07 00:58:15.766636 | orchestrator | Tuesday 07 April 2026 00:55:32 +0000 (0:00:01.403) 0:03:37.120 *********
2026-04-07 00:58:15.766641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-07 00:58:15.766646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-07 00:58:15.766675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-07 00:58:15.766706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-07 00:58:15.766742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-07 00:58:15.766749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'9696', 'listen_port': '9696'}}}})
2026-04-07 00:58:15.766756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-07 00:58:15.766778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-07 00:58:15.766788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-07 00:58:15.766817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-07 00:58:15.766847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image':
'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-07 00:58:15.766879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-07 00:58:15.766889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-07 00:58:15.766900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.766915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.766921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-07 00:58:15.766927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 00:58:15.766932 | orchestrator | 2026-04-07 00:58:15.766935 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-07 00:58:15.766939 | orchestrator | Tuesday 07 April 2026 00:55:36 +0000 (0:00:04.053) 0:03:41.174 ********* 2026-04-07 00:58:15.766943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 00:58:15.766950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.766954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 00:58:15.766961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.766968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.766972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.766980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.766984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-07 00:58:15.766990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.766994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-07 00:58:15.767009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 00:58:15.767013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 00:58:15.767021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 00:58:15.767028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 00:58:15.767038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 00:58:15.767046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-07 00:58:15.767058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 00:58:15.767064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 00:58:15.767071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 00:58:15.767088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-07 00:58:15.767092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 00:58:15.767096 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.767102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 00:58:15.767109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 00:58:15.767120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 00:58:15.767124 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.767128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 00:58:15.767134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.767148 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-07 00:58:15.767169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.767178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.767185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-07 00:58:15.767199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-07 00:58:15.767207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-07 00:58:15.767211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-07 00:58:15.767228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-07 00:58:15.767232 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.767236 | orchestrator |
2026-04-07 00:58:15.767240 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-04-07 00:58:15.767244 | orchestrator | Tuesday 07 April 2026 00:55:38 +0000 (0:00:02.052) 0:03:43.226 *********
2026-04-07 00:58:15.767248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-04-07 00:58:15.767252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-04-07 00:58:15.767256 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.767259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-04-07 00:58:15.767263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-04-07 00:58:15.767267 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.767271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-04-07 00:58:15.767275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-04-07 00:58:15.767279 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.767282 | orchestrator |
2026-04-07 00:58:15.767286 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-04-07 00:58:15.767290 | orchestrator | Tuesday 07 April 2026 00:55:40 +0000 (0:00:01.488) 0:03:44.715 *********
2026-04-07 00:58:15.767294 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.767297 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.767301 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.767305 | orchestrator |
2026-04-07 00:58:15.767309 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-04-07 00:58:15.767313 | orchestrator | Tuesday 07 April 2026 00:55:41 +0000 (0:00:01.438) 0:03:46.154 *********
2026-04-07 00:58:15.767316 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.767320 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.767324 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.767328 | orchestrator |
2026-04-07 00:58:15.767331 | orchestrator | TASK [include_role : placement] ************************************************
2026-04-07 00:58:15.767335 | orchestrator | Tuesday 07 April 2026 00:55:43 +0000 (0:00:02.040) 0:03:48.195 *********
2026-04-07 00:58:15.767339 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:58:15.767343 | orchestrator |
2026-04-07 00:58:15.767350 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-04-07 00:58:15.767353 | orchestrator | Tuesday 07 April 2026 00:55:45 +0000 (0:00:01.444) 0:03:49.639 *********
2026-04-07 00:58:15.767360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767375 | orchestrator |
2026-04-07 00:58:15.767378 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-04-07 00:58:15.767382 | orchestrator | Tuesday 07 April 2026 00:55:48 +0000 (0:00:03.198) 0:03:52.837 *********
2026-04-07 00:58:15.767386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767393 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.767399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767403 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.767589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767597 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.767601 | orchestrator |
2026-04-07 00:58:15.767604 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-04-07 00:58:15.767608 | orchestrator | Tuesday 07 April 2026 00:55:49 +0000 (0:00:00.512) 0:03:53.349 *********
2026-04-07 00:58:15.767612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767621 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.767625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767632 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.767636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767644 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.767653 | orchestrator |
2026-04-07 00:58:15.767656 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-04-07 00:58:15.767660 | orchestrator | Tuesday 07 April 2026 00:55:50 +0000 (0:00:01.318) 0:03:54.668 *********
2026-04-07 00:58:15.767664 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.767668 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.767672 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.767676 | orchestrator |
2026-04-07 00:58:15.767679 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-04-07 00:58:15.767683 | orchestrator | Tuesday 07 April 2026 00:55:51 +0000 (0:00:01.411) 0:03:56.079 *********
2026-04-07 00:58:15.767687 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.767691 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.767694 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.767698 | orchestrator |
2026-04-07 00:58:15.767702 | orchestrator | TASK [include_role : nova] *****************************************************
2026-04-07 00:58:15.767705 | orchestrator | Tuesday 07 April 2026 00:55:53 +0000 (0:00:02.236) 0:03:58.316 *********
2026-04-07 00:58:15.767709 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:58:15.767713 | orchestrator |
2026-04-07 00:58:15.767717 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-04-07 00:58:15.767721 | orchestrator | Tuesday 07 April 2026 00:55:55 +0000 (0:00:01.490) 0:03:59.806 *********
2026-04-07 00:58:15.767728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767781 | orchestrator |
2026-04-07 00:58:15.767785 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-04-07 00:58:15.767789 | orchestrator | Tuesday 07 April 2026 00:55:59 +0000 (0:00:04.386) 0:04:04.192 *********
2026-04-07 00:58:15.767795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767810 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.767814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767829 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.767835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.767842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.767852 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.767856 | orchestrator |
2026-04-07 00:58:15.767860 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-04-07 00:58:15.767864 | orchestrator | Tuesday 07 April 2026 00:56:00 +0000 (0:00:00.573) 0:04:04.765 *********
2026-04-07 00:58:15.767868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767884 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.767887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767905 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.767909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 00:58:15.767927 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.767936 | orchestrator |
2026-04-07 00:58:15.767939 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-04-07 00:58:15.767943 | orchestrator | Tuesday 07 April 2026 00:56:01 +0000 (0:00:00.839) 0:04:05.605 *********
2026-04-07 00:58:15.767947 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.767951 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.767954 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.767958 | orchestrator | 2026-04-07 00:58:15.767962 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-07 00:58:15.767966 | orchestrator | Tuesday 07 April 2026 00:56:02 +0000 (0:00:01.602) 0:04:07.207 ********* 2026-04-07 00:58:15.767970 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.767973 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.767977 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.767981 | orchestrator | 2026-04-07 00:58:15.767984 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-07 00:58:15.767988 | orchestrator | Tuesday 07 April 2026 00:56:05 +0000 (0:00:02.202) 0:04:09.409 ********* 2026-04-07 00:58:15.767992 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.767996 | orchestrator | 2026-04-07 00:58:15.767999 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-07 00:58:15.768003 | orchestrator | Tuesday 07 April 2026 00:56:06 +0000 (0:00:01.253) 0:04:10.663 ********* 2026-04-07 00:58:15.768007 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-07 00:58:15.768011 | orchestrator | 2026-04-07 00:58:15.768015 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-07 00:58:15.768019 | orchestrator | Tuesday 07 April 2026 00:56:07 +0000 (0:00:01.147) 0:04:11.811 ********* 2026-04-07 00:58:15.768023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-07 00:58:15.768027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-07 00:58:15.768031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-07 00:58:15.768035 | orchestrator | 2026-04-07 00:58:15.768039 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-07 00:58:15.768043 | orchestrator | Tuesday 07 April 2026 00:56:11 +0000 (0:00:03.622) 0:04:15.434 ********* 2026-04-07 00:58:15.768049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 00:58:15.768057 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.768061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 00:58:15.768066 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.768070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 00:58:15.768074 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.768078 | orchestrator | 2026-04-07 00:58:15.768082 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-07 00:58:15.768086 | orchestrator | Tuesday 07 April 2026 00:56:12 +0000 (0:00:01.199) 0:04:16.634 ********* 2026-04-07 00:58:15.768089 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 00:58:15.768093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 00:58:15.768101 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.768105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 00:58:15.768109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 00:58:15.768113 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.768117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 00:58:15.768120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 00:58:15.768124 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.768128 | orchestrator | 2026-04-07 00:58:15.768132 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL 
users config] ********** 2026-04-07 00:58:15.768136 | orchestrator | Tuesday 07 April 2026 00:56:13 +0000 (0:00:01.660) 0:04:18.294 ********* 2026-04-07 00:58:15.768139 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.768143 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.768147 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.768175 | orchestrator | 2026-04-07 00:58:15.768179 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-07 00:58:15.768183 | orchestrator | Tuesday 07 April 2026 00:56:16 +0000 (0:00:02.472) 0:04:20.767 ********* 2026-04-07 00:58:15.768187 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.768191 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.768194 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.768198 | orchestrator | 2026-04-07 00:58:15.768202 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-07 00:58:15.768206 | orchestrator | Tuesday 07 April 2026 00:56:19 +0000 (0:00:02.751) 0:04:23.518 ********* 2026-04-07 00:58:15.768212 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-07 00:58:15.768216 | orchestrator | 2026-04-07 00:58:15.768220 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-07 00:58:15.768224 | orchestrator | Tuesday 07 April 2026 00:56:20 +0000 (0:00:00.835) 0:04:24.354 ********* 2026-04-07 00:58:15.768228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 00:58:15.768232 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.768239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 00:58:15.768244 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.768248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 00:58:15.768253 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.768257 | orchestrator | 2026-04-07 00:58:15.768262 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-07 00:58:15.768266 | orchestrator | Tuesday 07 April 2026 00:56:21 +0000 (0:00:01.327) 0:04:25.681 ********* 2026-04-07 00:58:15.768271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 00:58:15.768275 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.768280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 00:58:15.768291 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.768295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 00:58:15.768300 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.768304 | orchestrator | 2026-04-07 00:58:15.768309 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-07 00:58:15.768313 | 
orchestrator | Tuesday 07 April 2026 00:56:22 +0000 (0:00:01.562) 0:04:27.243 ********* 2026-04-07 00:58:15.768318 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.768323 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.768330 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.768334 | orchestrator | 2026-04-07 00:58:15.768339 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-07 00:58:15.768343 | orchestrator | Tuesday 07 April 2026 00:56:24 +0000 (0:00:01.231) 0:04:28.475 ********* 2026-04-07 00:58:15.768348 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.768353 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.768357 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.768362 | orchestrator | 2026-04-07 00:58:15.768366 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-07 00:58:15.768370 | orchestrator | Tuesday 07 April 2026 00:56:26 +0000 (0:00:02.600) 0:04:31.076 ********* 2026-04-07 00:58:15.768375 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.768379 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.768383 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.768388 | orchestrator | 2026-04-07 00:58:15.768393 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-07 00:58:15.768397 | orchestrator | Tuesday 07 April 2026 00:56:29 +0000 (0:00:03.058) 0:04:34.134 ********* 2026-04-07 00:58:15.768402 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-07 00:58:15.768406 | orchestrator | 2026-04-07 00:58:15.768412 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-07 00:58:15.768417 | orchestrator | Tuesday 07 April 2026 00:56:30 +0000 
(0:00:00.749) 0:04:34.883 ********* 2026-04-07 00:58:15.768421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 00:58:15.768426 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.768431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 00:58:15.768438 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.768443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 00:58:15.768448 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.768452 | orchestrator | 
2026-04-07 00:58:15.768457 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-07 00:58:15.768461 | orchestrator | Tuesday 07 April 2026 00:56:31 +0000 (0:00:01.118) 0:04:36.002 ********* 2026-04-07 00:58:15.768466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 00:58:15.768470 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.768475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 00:58:15.768479 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.768486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 00:58:15.768490 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.768495 | orchestrator | 2026-04-07 00:58:15.768499 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-07 00:58:15.768504 | orchestrator | Tuesday 07 April 2026 00:56:32 +0000 (0:00:01.053) 0:04:37.055 ********* 2026-04-07 00:58:15.768508 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.768513 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.768517 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.768522 | orchestrator | 2026-04-07 00:58:15.768526 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-07 00:58:15.768532 | orchestrator | Tuesday 07 April 2026 00:56:33 +0000 (0:00:01.263) 0:04:38.318 ********* 2026-04-07 00:58:15.768537 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.768541 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.768545 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.768550 | orchestrator | 2026-04-07 00:58:15.768554 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-07 00:58:15.768562 | orchestrator | Tuesday 07 April 2026 00:56:36 +0000 (0:00:02.353) 0:04:40.671 ********* 2026-04-07 00:58:15.768566 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.768570 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.768575 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.768579 | orchestrator | 2026-04-07 00:58:15.768584 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-07 00:58:15.768589 | orchestrator | Tuesday 07 April 2026 00:56:39 +0000 (0:00:02.923) 0:04:43.594 ********* 2026-04-07 00:58:15.768594 | orchestrator | 
included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.768598 | orchestrator | 2026-04-07 00:58:15.768603 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-07 00:58:15.768607 | orchestrator | Tuesday 07 April 2026 00:56:40 +0000 (0:00:01.345) 0:04:44.940 ********* 2026-04-07 00:58:15.768611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.768616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 00:58:15.768620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 00:58:15.768627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 00:58:15.768631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 00:58:15.768641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.768645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-07 00:58:15.768649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.768665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.768671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-07 00:58:15.768675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.768688 | orchestrator |
2026-04-07 00:58:15.768691 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-04-07 00:58:15.768695 | orchestrator | Tuesday 07 April 2026 00:56:44 +0000 (0:00:03.638) 0:04:48.578 *********
2026-04-07 00:58:15.768701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.768710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-07 00:58:15.768716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.768728 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.768732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.768738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-07 00:58:15.768742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.768759 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.768763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-07 00:58:15.768767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-07 00:58:15.768771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-07 00:58:15.768787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-07 00:58:15.768791 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.768794 | orchestrator |
2026-04-07 00:58:15.768798 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-04-07 00:58:15.768802 | orchestrator | Tuesday 07 April 2026 00:56:45 +0000 (0:00:01.121) 0:04:49.700 *********
2026-04-07 00:58:15.768806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 00:58:15.768810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 00:58:15.768814 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.768818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 00:58:15.768821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 00:58:15.768825 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.768829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 00:58:15.768833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 00:58:15.768837 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.768841 | orchestrator |
2026-04-07 00:58:15.768844 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-07 00:58:15.768848 | orchestrator | Tuesday 07 April 2026 00:56:46 +0000 (0:00:00.912) 0:04:50.613 *********
2026-04-07 00:58:15.768852 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.768856 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.768859 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.768863 | orchestrator |
2026-04-07 00:58:15.768867 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-07 00:58:15.768871 | orchestrator | Tuesday 07 April 2026 00:56:47 +0000 (0:00:01.443) 0:04:52.057 *********
2026-04-07 00:58:15.768875 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:58:15.768878 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:58:15.768882 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:58:15.768888 | orchestrator |
2026-04-07 00:58:15.768892 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-04-07 00:58:15.768896 | orchestrator | Tuesday 07 April 2026 00:56:50 +0000 (0:00:02.310) 0:04:54.367 *********
2026-04-07 00:58:15.768900 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:58:15.768903 | orchestrator |
2026-04-07 00:58:15.768907 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-04-07 00:58:15.768911 | orchestrator | Tuesday 07 April 2026 00:56:51 +0000 (0:00:01.625) 0:04:55.993 *********
2026-04-07 00:58:15.768916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 00:58:15.768922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 00:58:15.768927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 00:58:15.768982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 00:58:15.769002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 00:58:15.769124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 00:58:15.769131 | orchestrator |
2026-04-07 00:58:15.769135 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-07 00:58:15.769139 | orchestrator | Tuesday 07 April 2026 00:56:56 +0000 (0:00:05.018) 0:05:01.011 *********
2026-04-07 00:58:15.769144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 00:58:15.769148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 00:58:15.769170 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.769175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 00:58:15.769184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 00:58:15.769189 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.769197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 00:58:15.769201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 00:58:15.769208 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.769212 | orchestrator |
2026-04-07 00:58:15.769216 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-07 00:58:15.769220 | orchestrator | Tuesday 07 April 2026 00:56:57 +0000 (0:00:01.046) 0:05:02.058 *********
2026-04-07 00:58:15.769224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-07 00:58:15.769228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 00:58:15.769233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 00:58:15.769238 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.769242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-07 00:58:15.769248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 00:58:15.769252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 00:58:15.769256 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.769260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-07 00:58:15.769269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 00:58:15.769273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 00:58:15.769277 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.769281 | orchestrator |
2026-04-07 00:58:15.769285 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-07 00:58:15.769289 | orchestrator | Tuesday 07 April 2026 00:56:59 +0000 (0:00:01.546) 0:05:03.604 *********
2026-04-07 00:58:15.769293 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.769297 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.769301 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.769305 | orchestrator |
2026-04-07 00:58:15.769308 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-07 00:58:15.769312 | orchestrator | Tuesday 07 April 2026 00:56:59 +0000 (0:00:00.424) 0:05:04.028 *********
2026-04-07 00:58:15.769316 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:58:15.769323 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:58:15.769326 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:58:15.769330 | orchestrator |
2026-04-07 00:58:15.769334 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-07 00:58:15.769338 | orchestrator | Tuesday 07 April 2026 00:57:01 +0000 (0:00:01.627) 0:05:05.386 *********
2026-04-07 00:58:15.769341 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:58:15.769345 | orchestrator |
2026-04-07 00:58:15.769349 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-07 00:58:15.769353 | orchestrator | Tuesday 07 April 2026 00:57:02 +0000 (0:00:01.627) 0:05:07.013 *********
2026-04-07 00:58:15.769357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-07 00:58:15.769361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 00:58:15.769366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:58:15.769373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 00:58:15.769379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 00:58:15.769384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-07 00:58:15.769393 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 00:58:15.769399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-07 00:58:15.769431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 00:58:15.769437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-07 00:58:15.769447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 00:58:15.769471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-07 00:58:15.769478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 00:58:15.769513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-07 00:58:15.769525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 00:58:15.769558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-07 00:58:15.769562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769577 | orchestrator | 2026-04-07 00:58:15.769580 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-07 00:58:15.769584 | orchestrator | Tuesday 07 April 2026 00:57:06 +0000 (0:00:04.290) 0:05:11.304 
********* 2026-04-07 00:58:15.769591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-07 00:58:15.769598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 00:58:15.769602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-07 00:58:15.769622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-07 00:58:15.769629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769643 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.769648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-07 00:58:15.769654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-07 00:58:15.769659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 00:58:15.769667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 00:58:15.769672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-07 00:58:15.769708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-07 00:58:15.769713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-07 00:58:15.769717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-07 00:58:15.769724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 00:58:15.769751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 00:58:15.769759 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.769763 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.769766 | orchestrator | 2026-04-07 00:58:15.769770 | orchestrator | TASK 
[haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-07 00:58:15.769774 | orchestrator | Tuesday 07 April 2026 00:57:07 +0000 (0:00:00.816) 0:05:12.120 ********* 2026-04-07 00:58:15.769778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-07 00:58:15.769782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-07 00:58:15.769787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 00:58:15.769794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 00:58:15.769798 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.769804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-07 00:58:15.769811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-07 00:58:15.769817 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 00:58:15.769823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 00:58:15.769829 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.769838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-07 00:58:15.769844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-07 00:58:15.769850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 00:58:15.769856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 00:58:15.769863 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.769869 | orchestrator | 2026-04-07 
00:58:15.769875 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-07 00:58:15.769881 | orchestrator | Tuesday 07 April 2026 00:57:09 +0000 (0:00:01.232) 0:05:13.353 ********* 2026-04-07 00:58:15.769887 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.769895 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.769898 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.769902 | orchestrator | 2026-04-07 00:58:15.769906 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-07 00:58:15.769910 | orchestrator | Tuesday 07 April 2026 00:57:09 +0000 (0:00:00.452) 0:05:13.805 ********* 2026-04-07 00:58:15.769914 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.769917 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.769921 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.769925 | orchestrator | 2026-04-07 00:58:15.769929 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-07 00:58:15.769933 | orchestrator | Tuesday 07 April 2026 00:57:10 +0000 (0:00:01.287) 0:05:15.093 ********* 2026-04-07 00:58:15.769940 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.769944 | orchestrator | 2026-04-07 00:58:15.769948 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-07 00:58:15.769952 | orchestrator | Tuesday 07 April 2026 00:57:12 +0000 (0:00:01.456) 0:05:16.550 ********* 2026-04-07 00:58:15.769956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:58:15.769965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:58:15.769973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 00:58:15.769977 | orchestrator | 2026-04-07 00:58:15.769981 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-07 00:58:15.769985 | orchestrator | Tuesday 07 April 2026 00:57:14 +0000 (0:00:02.701) 0:05:19.252 ********* 2026-04-07 00:58:15.769989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-07 00:58:15.769996 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-07 00:58:15.770004 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-07 00:58:15.770047 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770051 | orchestrator | 2026-04-07 00:58:15.770057 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-07 00:58:15.770062 | orchestrator | Tuesday 07 April 2026 00:57:15 +0000 (0:00:00.419) 0:05:19.672 ********* 2026-04-07 00:58:15.770066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-07 00:58:15.770070 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-07 00:58:15.770078 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-07 00:58:15.770085 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770089 | orchestrator | 2026-04-07 00:58:15.770093 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-07 00:58:15.770097 | orchestrator | Tuesday 07 April 2026 00:57:15 +0000 (0:00:00.617) 0:05:20.289 ********* 2026-04-07 00:58:15.770100 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770109 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770113 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770116 | orchestrator | 2026-04-07 00:58:15.770120 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-07 
00:58:15.770124 | orchestrator | Tuesday 07 April 2026 00:57:16 +0000 (0:00:00.803) 0:05:21.093 ********* 2026-04-07 00:58:15.770128 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770132 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770137 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770143 | orchestrator | 2026-04-07 00:58:15.770172 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-07 00:58:15.770178 | orchestrator | Tuesday 07 April 2026 00:57:18 +0000 (0:00:01.568) 0:05:22.661 ********* 2026-04-07 00:58:15.770185 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:58:15.770192 | orchestrator | 2026-04-07 00:58:15.770196 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-07 00:58:15.770200 | orchestrator | Tuesday 07 April 2026 00:57:19 +0000 (0:00:01.430) 0:05:24.091 ********* 2026-04-07 00:58:15.770204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.770211 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.770219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.770223 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.770231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.770235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 
'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 00:58:15.770239 | orchestrator | 2026-04-07 00:58:15.770246 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-07 00:58:15.770249 | orchestrator | Tuesday 07 April 2026 00:57:26 +0000 (0:00:06.304) 0:05:30.396 ********* 2026-04-07 00:58:15.770256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.770260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.770267 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}}}})  2026-04-07 00:58:15.770275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.770279 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}}}})  2026-04-07 00:58:15.770292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 00:58:15.770299 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770303 | orchestrator | 2026-04-07 00:58:15.770307 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-07 00:58:15.770310 | orchestrator | Tuesday 07 April 2026 00:57:27 +0000 (0:00:01.060) 0:05:31.456 ********* 2026-04-07 00:58:15.770314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770330 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770349 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 00:58:15.770373 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770377 | orchestrator | 2026-04-07 00:58:15.770381 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-07 00:58:15.770385 | orchestrator | Tuesday 07 April 2026 00:57:28 +0000 (0:00:00.984) 0:05:32.440 ********* 2026-04-07 00:58:15.770389 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.770392 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.770396 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.770400 | orchestrator | 2026-04-07 00:58:15.770406 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-07 00:58:15.770410 | orchestrator | Tuesday 07 April 2026 00:57:29 +0000 (0:00:01.438) 0:05:33.878 ********* 2026-04-07 00:58:15.770413 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.770417 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.770421 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.770425 | orchestrator | 2026-04-07 00:58:15.770428 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-07 00:58:15.770432 | orchestrator | Tuesday 07 April 2026 00:57:31 +0000 (0:00:02.315) 0:05:36.194 ********* 2026-04-07 00:58:15.770436 | orchestrator | 
skipping: [testbed-node-0] 2026-04-07 00:58:15.770440 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770443 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770447 | orchestrator | 2026-04-07 00:58:15.770451 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-07 00:58:15.770455 | orchestrator | Tuesday 07 April 2026 00:57:32 +0000 (0:00:00.652) 0:05:36.846 ********* 2026-04-07 00:58:15.770459 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770462 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770466 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770470 | orchestrator | 2026-04-07 00:58:15.770474 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-07 00:58:15.770477 | orchestrator | Tuesday 07 April 2026 00:57:32 +0000 (0:00:00.335) 0:05:37.181 ********* 2026-04-07 00:58:15.770481 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770485 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770489 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770492 | orchestrator | 2026-04-07 00:58:15.770496 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-07 00:58:15.770500 | orchestrator | Tuesday 07 April 2026 00:57:33 +0000 (0:00:00.328) 0:05:37.510 ********* 2026-04-07 00:58:15.770503 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770507 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770511 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770515 | orchestrator | 2026-04-07 00:58:15.770518 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-07 00:58:15.770522 | orchestrator | Tuesday 07 April 2026 00:57:33 +0000 (0:00:00.321) 0:05:37.831 ********* 2026-04-07 00:58:15.770526 | orchestrator | 
skipping: [testbed-node-0] 2026-04-07 00:58:15.770530 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770533 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770537 | orchestrator | 2026-04-07 00:58:15.770541 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-07 00:58:15.770544 | orchestrator | Tuesday 07 April 2026 00:57:34 +0000 (0:00:00.707) 0:05:38.539 ********* 2026-04-07 00:58:15.770548 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770552 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770556 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770560 | orchestrator | 2026-04-07 00:58:15.770563 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-07 00:58:15.770567 | orchestrator | Tuesday 07 April 2026 00:57:34 +0000 (0:00:00.552) 0:05:39.091 ********* 2026-04-07 00:58:15.770571 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.770580 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.770584 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.770588 | orchestrator | 2026-04-07 00:58:15.770591 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-07 00:58:15.770596 | orchestrator | Tuesday 07 April 2026 00:57:35 +0000 (0:00:00.723) 0:05:39.815 ********* 2026-04-07 00:58:15.770602 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.770607 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.770611 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.770615 | orchestrator | 2026-04-07 00:58:15.770619 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-07 00:58:15.770623 | orchestrator | Tuesday 07 April 2026 00:57:36 +0000 (0:00:00.779) 0:05:40.594 ********* 2026-04-07 00:58:15.770626 | orchestrator | ok: [testbed-node-0] 2026-04-07 
00:58:15.770630 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.770634 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.770637 | orchestrator | 2026-04-07 00:58:15.770641 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-07 00:58:15.770645 | orchestrator | Tuesday 07 April 2026 00:57:37 +0000 (0:00:01.026) 0:05:41.621 ********* 2026-04-07 00:58:15.770649 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.770652 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.770656 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.770660 | orchestrator | 2026-04-07 00:58:15.770664 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-07 00:58:15.770667 | orchestrator | Tuesday 07 April 2026 00:57:38 +0000 (0:00:00.967) 0:05:42.589 ********* 2026-04-07 00:58:15.770671 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.770675 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.770681 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.770685 | orchestrator | 2026-04-07 00:58:15.770688 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-07 00:58:15.770692 | orchestrator | Tuesday 07 April 2026 00:57:39 +0000 (0:00:00.990) 0:05:43.580 ********* 2026-04-07 00:58:15.770696 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.770700 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.770706 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.770712 | orchestrator | 2026-04-07 00:58:15.770718 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-07 00:58:15.770725 | orchestrator | Tuesday 07 April 2026 00:57:47 +0000 (0:00:08.415) 0:05:51.995 ********* 2026-04-07 00:58:15.770731 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.770737 | orchestrator | ok: [testbed-node-1] 
2026-04-07 00:58:15.770743 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.770749 | orchestrator | 2026-04-07 00:58:15.770755 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-07 00:58:15.770761 | orchestrator | Tuesday 07 April 2026 00:57:48 +0000 (0:00:01.310) 0:05:53.305 ********* 2026-04-07 00:58:15.770765 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.770768 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.770772 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.770776 | orchestrator | 2026-04-07 00:58:15.770780 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-07 00:58:15.770786 | orchestrator | Tuesday 07 April 2026 00:57:57 +0000 (0:00:08.376) 0:06:01.682 ********* 2026-04-07 00:58:15.770790 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.770794 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.770798 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.770801 | orchestrator | 2026-04-07 00:58:15.770807 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-07 00:58:15.770813 | orchestrator | Tuesday 07 April 2026 00:58:01 +0000 (0:00:03.817) 0:06:05.500 ********* 2026-04-07 00:58:15.770818 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:58:15.770824 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:58:15.770830 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:58:15.770840 | orchestrator | 2026-04-07 00:58:15.770845 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-07 00:58:15.770852 | orchestrator | Tuesday 07 April 2026 00:58:05 +0000 (0:00:04.457) 0:06:09.957 ********* 2026-04-07 00:58:15.770858 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770864 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770870 | 
orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770876 | orchestrator | 2026-04-07 00:58:15.770883 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-07 00:58:15.770887 | orchestrator | Tuesday 07 April 2026 00:58:06 +0000 (0:00:00.693) 0:06:10.651 ********* 2026-04-07 00:58:15.770891 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770894 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770898 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770902 | orchestrator | 2026-04-07 00:58:15.770906 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-07 00:58:15.770909 | orchestrator | Tuesday 07 April 2026 00:58:06 +0000 (0:00:00.352) 0:06:11.004 ********* 2026-04-07 00:58:15.770913 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770917 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770920 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770924 | orchestrator | 2026-04-07 00:58:15.770928 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-07 00:58:15.770932 | orchestrator | Tuesday 07 April 2026 00:58:06 +0000 (0:00:00.338) 0:06:11.342 ********* 2026-04-07 00:58:15.770935 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770939 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770943 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770946 | orchestrator | 2026-04-07 00:58:15.770950 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-07 00:58:15.770954 | orchestrator | Tuesday 07 April 2026 00:58:07 +0000 (0:00:00.332) 0:06:11.675 ********* 2026-04-07 00:58:15.770958 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770961 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770965 | 
orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770969 | orchestrator | 2026-04-07 00:58:15.770973 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-07 00:58:15.770976 | orchestrator | Tuesday 07 April 2026 00:58:08 +0000 (0:00:00.704) 0:06:12.380 ********* 2026-04-07 00:58:15.770980 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:58:15.770984 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:58:15.770987 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:58:15.770991 | orchestrator | 2026-04-07 00:58:15.770995 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-07 00:58:15.770999 | orchestrator | Tuesday 07 April 2026 00:58:08 +0000 (0:00:00.349) 0:06:12.730 ********* 2026-04-07 00:58:15.771002 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.771006 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.771010 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.771014 | orchestrator | 2026-04-07 00:58:15.771017 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-07 00:58:15.771021 | orchestrator | Tuesday 07 April 2026 00:58:13 +0000 (0:00:04.743) 0:06:17.473 ********* 2026-04-07 00:58:15.771025 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:58:15.771028 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:58:15.771032 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:58:15.771036 | orchestrator | 2026-04-07 00:58:15.771040 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:58:15.771044 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-07 00:58:15.771048 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-07 00:58:15.771059 | orchestrator | 
testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-07 00:58:15.771063 | orchestrator | 2026-04-07 00:58:15.771067 | orchestrator | 2026-04-07 00:58:15.771071 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:58:15.771074 | orchestrator | Tuesday 07 April 2026 00:58:14 +0000 (0:00:01.009) 0:06:18.483 ********* 2026-04-07 00:58:15.771078 | orchestrator | =============================================================================== 2026-04-07 00:58:15.771082 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.42s 2026-04-07 00:58:15.771086 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.38s 2026-04-07 00:58:15.771089 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.72s 2026-04-07 00:58:15.771093 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.30s 2026-04-07 00:58:15.771097 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.00s 2026-04-07 00:58:15.771100 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.04s 2026-04-07 00:58:15.771104 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.02s 2026-04-07 00:58:15.771108 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.86s 2026-04-07 00:58:15.771115 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.74s 2026-04-07 00:58:15.771119 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.74s 2026-04-07 00:58:15.771122 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.72s 2026-04-07 00:58:15.771126 | orchestrator | haproxy-config : Copying over manila haproxy 
config --------------------- 4.52s 2026-04-07 00:58:15.771130 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.46s 2026-04-07 00:58:15.771134 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.39s 2026-04-07 00:58:15.771137 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.29s 2026-04-07 00:58:15.771141 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.16s 2026-04-07 00:58:15.771145 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.05s 2026-04-07 00:58:15.771148 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.89s 2026-04-07 00:58:15.771172 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.82s 2026-04-07 00:58:15.771176 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.64s 2026-04-07 00:58:15.771180 | orchestrator | 2026-04-07 00:58:15 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:15.771184 | orchestrator | 2026-04-07 00:58:15 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:18.792709 | orchestrator | 2026-04-07 00:58:18 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:18.792793 | orchestrator | 2026-04-07 00:58:18 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:18.792802 | orchestrator | 2026-04-07 00:58:18 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:18.792811 | orchestrator | 2026-04-07 00:58:18 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:21.823803 | orchestrator | 2026-04-07 00:58:21 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:21.825028 | orchestrator | 2026-04-07 00:58:21 | 
INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:21.826183 | orchestrator | 2026-04-07 00:58:21 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:21.826271 | orchestrator | 2026-04-07 00:58:21 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:24.856535 | orchestrator | 2026-04-07 00:58:24 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:24.857089 | orchestrator | 2026-04-07 00:58:24 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:24.857820 | orchestrator | 2026-04-07 00:58:24 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:24.857923 | orchestrator | 2026-04-07 00:58:24 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:27.894410 | orchestrator | 2026-04-07 00:58:27 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:27.894637 | orchestrator | 2026-04-07 00:58:27 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:27.895819 | orchestrator | 2026-04-07 00:58:27 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:27.895989 | orchestrator | 2026-04-07 00:58:27 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:30.919269 | orchestrator | 2026-04-07 00:58:30 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:30.920255 | orchestrator | 2026-04-07 00:58:30 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:30.920293 | orchestrator | 2026-04-07 00:58:30 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:30.920304 | orchestrator | 2026-04-07 00:58:30 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:33.951134 | orchestrator | 2026-04-07 00:58:33 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in 
state STARTED 2026-04-07 00:58:33.951719 | orchestrator | 2026-04-07 00:58:33 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:33.953162 | orchestrator | 2026-04-07 00:58:33 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:33.953225 | orchestrator | 2026-04-07 00:58:33 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:36.986763 | orchestrator | 2026-04-07 00:58:36 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:36.986844 | orchestrator | 2026-04-07 00:58:36 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:36.987910 | orchestrator | 2026-04-07 00:58:36 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:36.987937 | orchestrator | 2026-04-07 00:58:36 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:40.116248 | orchestrator | 2026-04-07 00:58:40 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:40.116998 | orchestrator | 2026-04-07 00:58:40 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:40.117851 | orchestrator | 2026-04-07 00:58:40 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:40.117880 | orchestrator | 2026-04-07 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:43.158799 | orchestrator | 2026-04-07 00:58:43 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:43.160704 | orchestrator | 2026-04-07 00:58:43 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:43.163636 | orchestrator | 2026-04-07 00:58:43 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:43.163679 | orchestrator | 2026-04-07 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:46.217970 | orchestrator 
| 2026-04-07 00:58:46 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:46.218080 | orchestrator | 2026-04-07 00:58:46 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:46.219524 | orchestrator | 2026-04-07 00:58:46 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:46.219552 | orchestrator | 2026-04-07 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:49.274963 | orchestrator | 2026-04-07 00:58:49 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:49.277830 | orchestrator | 2026-04-07 00:58:49 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:49.280890 | orchestrator | 2026-04-07 00:58:49 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:49.281716 | orchestrator | 2026-04-07 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:52.332672 | orchestrator | 2026-04-07 00:58:52 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:52.334368 | orchestrator | 2026-04-07 00:58:52 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:52.336255 | orchestrator | 2026-04-07 00:58:52 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:52.336398 | orchestrator | 2026-04-07 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:58:55.387688 | orchestrator | 2026-04-07 00:58:55 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:55.390079 | orchestrator | 2026-04-07 00:58:55 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:55.392209 | orchestrator | 2026-04-07 00:58:55 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:55.392257 | orchestrator | 2026-04-07 00:58:55 | INFO  | 
Wait 1 second(s) until the next check 2026-04-07 00:58:58.429064 | orchestrator | 2026-04-07 00:58:58 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:58:58.432247 | orchestrator | 2026-04-07 00:58:58 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:58:58.435065 | orchestrator | 2026-04-07 00:58:58 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:58:58.436121 | orchestrator | 2026-04-07 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:59:01.482120 | orchestrator | 2026-04-07 00:59:01 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:59:01.483912 | orchestrator | 2026-04-07 00:59:01 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:59:01.486009 | orchestrator | 2026-04-07 00:59:01 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:59:01.488378 | orchestrator | 2026-04-07 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:59:04.529626 | orchestrator | 2026-04-07 00:59:04 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:59:04.530180 | orchestrator | 2026-04-07 00:59:04 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:59:04.531224 | orchestrator | 2026-04-07 00:59:04 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:59:04.531383 | orchestrator | 2026-04-07 00:59:04 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:59:07.580754 | orchestrator | 2026-04-07 00:59:07 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:59:07.581363 | orchestrator | 2026-04-07 00:59:07 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:59:07.584246 | orchestrator | 2026-04-07 00:59:07 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state 
STARTED 2026-04-07 00:59:07.584301 | orchestrator | 2026-04-07 00:59:07 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:59:10.630492 | orchestrator | 2026-04-07 00:59:10 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:59:10.632138 | orchestrator | 2026-04-07 00:59:10 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:59:10.634282 | orchestrator | 2026-04-07 00:59:10 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:59:10.634336 | orchestrator | 2026-04-07 00:59:10 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:59:13.675651 | orchestrator | 2026-04-07 00:59:13 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:59:13.677995 | orchestrator | 2026-04-07 00:59:13 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:59:13.681021 | orchestrator | 2026-04-07 00:59:13 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:59:13.681152 | orchestrator | 2026-04-07 00:59:13 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:59:16.722775 | orchestrator | 2026-04-07 00:59:16 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:59:16.725607 | orchestrator | 2026-04-07 00:59:16 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:59:16.728040 | orchestrator | 2026-04-07 00:59:16 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED 2026-04-07 00:59:16.728184 | orchestrator | 2026-04-07 00:59:16 | INFO  | Wait 1 second(s) until the next check 2026-04-07 00:59:19.786083 | orchestrator | 2026-04-07 00:59:19 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 00:59:19.788416 | orchestrator | 2026-04-07 00:59:19 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 00:59:19.790393 | orchestrator | 
2026-04-07 00:59:19 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:19.790593 | orchestrator | 2026-04-07 00:59:19 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:22.836074 | orchestrator | 2026-04-07 00:59:22 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:22.838143 | orchestrator | 2026-04-07 00:59:22 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:22.839779 | orchestrator | 2026-04-07 00:59:22 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:22.839925 | orchestrator | 2026-04-07 00:59:22 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:25.892588 | orchestrator | 2026-04-07 00:59:25 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:25.895388 | orchestrator | 2026-04-07 00:59:25 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:25.896704 | orchestrator | 2026-04-07 00:59:25 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:25.896808 | orchestrator | 2026-04-07 00:59:25 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:28.961171 | orchestrator | 2026-04-07 00:59:28 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:28.961299 | orchestrator | 2026-04-07 00:59:28 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:28.961328 | orchestrator | 2026-04-07 00:59:28 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:28.961333 | orchestrator | 2026-04-07 00:59:28 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:32.006740 | orchestrator | 2026-04-07 00:59:31 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:32.009072 | orchestrator | 2026-04-07 00:59:32 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:32.013375 | orchestrator | 2026-04-07 00:59:32 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:32.013457 | orchestrator | 2026-04-07 00:59:32 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:35.072379 | orchestrator | 2026-04-07 00:59:35 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:35.075035 | orchestrator | 2026-04-07 00:59:35 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:35.078593 | orchestrator | 2026-04-07 00:59:35 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:35.078675 | orchestrator | 2026-04-07 00:59:35 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:38.124976 | orchestrator | 2026-04-07 00:59:38 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:38.126008 | orchestrator | 2026-04-07 00:59:38 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:38.127256 | orchestrator | 2026-04-07 00:59:38 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:38.127282 | orchestrator | 2026-04-07 00:59:38 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:41.192861 | orchestrator | 2026-04-07 00:59:41 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:41.195573 | orchestrator | 2026-04-07 00:59:41 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:41.197422 | orchestrator | 2026-04-07 00:59:41 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:41.197491 | orchestrator | 2026-04-07 00:59:41 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:44.247095 | orchestrator | 2026-04-07 00:59:44 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:44.248592 | orchestrator | 2026-04-07 00:59:44 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:44.250162 | orchestrator | 2026-04-07 00:59:44 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:44.250245 | orchestrator | 2026-04-07 00:59:44 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:47.296844 | orchestrator | 2026-04-07 00:59:47 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:47.300497 | orchestrator | 2026-04-07 00:59:47 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:47.302130 | orchestrator | 2026-04-07 00:59:47 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:47.302301 | orchestrator | 2026-04-07 00:59:47 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:50.341093 | orchestrator | 2026-04-07 00:59:50 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:50.342841 | orchestrator | 2026-04-07 00:59:50 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:50.345280 | orchestrator | 2026-04-07 00:59:50 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:50.345358 | orchestrator | 2026-04-07 00:59:50 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:53.386076 | orchestrator | 2026-04-07 00:59:53 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:53.386192 | orchestrator | 2026-04-07 00:59:53 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:53.387172 | orchestrator | 2026-04-07 00:59:53 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:53.387210 | orchestrator | 2026-04-07 00:59:53 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:56.423400 | orchestrator | 2026-04-07 00:59:56 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:56.427739 | orchestrator | 2026-04-07 00:59:56 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:56.429137 | orchestrator | 2026-04-07 00:59:56 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state STARTED
2026-04-07 00:59:56.429186 | orchestrator | 2026-04-07 00:59:56 | INFO  | Wait 1 second(s) until the next check
2026-04-07 00:59:59.481346 | orchestrator | 2026-04-07 00:59:59 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED
2026-04-07 00:59:59.483858 | orchestrator | 2026-04-07 00:59:59 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED
2026-04-07 00:59:59.487979 | orchestrator | 2026-04-07 00:59:59 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED
2026-04-07 00:59:59.494964 | orchestrator | 2026-04-07 00:59:59 | INFO  | Task 416047b1-3b0d-46e6-9711-0fd037214fb6 is in state SUCCESS
2026-04-07 00:59:59.495936 | orchestrator |
2026-04-07 00:59:59.495959 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-07 00:59:59.495964 | orchestrator | 2.16.14
2026-04-07 00:59:59.495968 | orchestrator |
2026-04-07 00:59:59.495972 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-07 00:59:59.495976 | orchestrator |
2026-04-07 00:59:59.495980 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-07 00:59:59.495985 | orchestrator | Tuesday 07 April 2026 00:49:31 +0000 (0:00:00.717) 0:00:00.717 *********
2026-04-07 00:59:59.496002 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.496008 | orchestrator |
2026-04-07 00:59:59.496012 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-07 00:59:59.496016 | orchestrator | Tuesday 07 April 2026 00:49:33 +0000 (0:00:01.196) 0:00:01.913 *********
2026-04-07 00:59:59.496020 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496024 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.496029 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.496032 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.496036 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.496040 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.496044 | orchestrator |
2026-04-07 00:59:59.496048 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-07 00:59:59.496052 | orchestrator | Tuesday 07 April 2026 00:49:34 +0000 (0:00:01.634) 0:00:03.548 *********
2026-04-07 00:59:59.496058 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496064 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.496067 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.496071 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.496075 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.496079 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.496082 | orchestrator |
2026-04-07 00:59:59.496086 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-07 00:59:59.496100 | orchestrator | Tuesday 07 April 2026 00:49:35 +0000 (0:00:00.888) 0:00:04.436 *********
2026-04-07 00:59:59.496104 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496108 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.496112 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.496115 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.496119 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.496123 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.496126 | orchestrator |
2026-04-07 00:59:59.496130 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-07 00:59:59.496134 | orchestrator | Tuesday 07 April 2026 00:49:36 +0000 (0:00:01.365) 0:00:05.801 *********
2026-04-07 00:59:59.496137 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496141 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.496145 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.496149 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.496152 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.496156 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.496160 | orchestrator |
2026-04-07 00:59:59.496163 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-07 00:59:59.496167 | orchestrator | Tuesday 07 April 2026 00:49:38 +0000 (0:00:01.196) 0:00:06.998 *********
2026-04-07 00:59:59.496183 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496187 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.496191 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.496195 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.496198 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.496202 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.496206 | orchestrator |
2026-04-07 00:59:59.496210 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-07 00:59:59.496214 | orchestrator | Tuesday 07 April 2026 00:49:39 +0000 (0:00:01.200) 0:00:08.199 *********
2026-04-07 00:59:59.496270 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496275 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.496279 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.496282 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.496286 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.496296 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.496300 | orchestrator |
2026-04-07 00:59:59.496347 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-07 00:59:59.496352 | orchestrator | Tuesday 07 April 2026 00:49:40 +0000 (0:00:01.417) 0:00:09.616 *********
2026-04-07 00:59:59.496355 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496366 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.496370 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.496374 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.496378 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.496384 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.496399 | orchestrator |
2026-04-07 00:59:59.496403 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-07 00:59:59.496406 | orchestrator | Tuesday 07 April 2026 00:49:41 +0000 (0:00:00.841) 0:00:10.457 *********
2026-04-07 00:59:59.496410 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496414 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.496418 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.496434 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.496438 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.496442 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.496466 | orchestrator |
2026-04-07 00:59:59.496470 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-07 00:59:59.496474 | orchestrator | Tuesday 07 April 2026 00:49:42 +0000 (0:00:00.810) 0:00:11.268 *********
2026-04-07 00:59:59.496478 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:59:59.496482 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 00:59:59.496490 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 00:59:59.496494 | orchestrator |
2026-04-07 00:59:59.496497 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-07 00:59:59.496501 | orchestrator | Tuesday 07 April 2026 00:49:43 +0000 (0:00:00.613) 0:00:11.882 *********
2026-04-07 00:59:59.496505 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496509 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.496513 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.496516 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.496526 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.496530 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.496534 | orchestrator |
2026-04-07 00:59:59.496541 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-07 00:59:59.496547 | orchestrator | Tuesday 07 April 2026 00:49:44 +0000 (0:00:01.546) 0:00:13.428 *********
2026-04-07 00:59:59.496554 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:59:59.496560 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 00:59:59.496565 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 00:59:59.496571 | orchestrator |
2026-04-07 00:59:59.496579 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-07 00:59:59.496583 | orchestrator | Tuesday 07 April 2026 00:49:47 +0000 (0:00:02.995) 0:00:16.424 *********
2026-04-07 00:59:59.496587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:59:59.496591 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 00:59:59.496594 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 00:59:59.496598 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496602 | orchestrator |
2026-04-07 00:59:59.496606 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-07 00:59:59.496610 | orchestrator | Tuesday 07 April 2026 00:49:48 +0000 (0:00:01.205) 0:00:17.629 *********
2026-04-07 00:59:59.496615 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.496621 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.496624 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.496628 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496632 | orchestrator |
2026-04-07 00:59:59.496636 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-07 00:59:59.496640 | orchestrator | Tuesday 07 April 2026 00:49:50 +0000 (0:00:01.689) 0:00:19.319 *********
2026-04-07 00:59:59.496645 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.496651 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.496661 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.496666 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496669 | orchestrator |
2026-04-07 00:59:59.496673 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-07 00:59:59.496677 | orchestrator | Tuesday 07 April 2026 00:49:50 +0000 (0:00:00.308) 0:00:19.628 *********
2026-04-07 00:59:59.496685 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-07 00:49:45.414304', 'end': '2026-04-07 00:49:45.500222', 'delta': '0:00:00.085918', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.496691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-07 00:49:46.464020', 'end': '2026-04-07 00:49:46.531377', 'delta': '0:00:00.067357', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.496695 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-07 00:49:47.200713', 'end': '2026-04-07 00:49:47.278828', 'delta': '0:00:00.078115', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.496699 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496703 | orchestrator |
2026-04-07 00:59:59.496707 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-07 00:59:59.496710 | orchestrator | Tuesday 07 April 2026 00:49:51 +0000 (0:00:00.864) 0:00:20.492 *********
2026-04-07 00:59:59.496714 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.496718 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496722 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.496725 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.496729 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.496733 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.496736 | orchestrator |
2026-04-07 00:59:59.496740 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-07 00:59:59.496744 | orchestrator | Tuesday 07 April 2026 00:49:53 +0000 (0:00:02.140) 0:00:22.632 *********
2026-04-07 00:59:59.496750 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.496754 | orchestrator |
2026-04-07 00:59:59.496759 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-07 00:59:59.496765 | orchestrator | Tuesday 07 April 2026 00:49:55 +0000 (0:00:01.378) 0:00:24.011 *********
2026-04-07 00:59:59.496771 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496778 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.496784 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.496790 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.496796 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.496802 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.496809 | orchestrator |
2026-04-07 00:59:59.496815 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-07 00:59:59.496821 | orchestrator | Tuesday 07 April 2026 00:49:56 +0000 (0:00:01.093) 0:00:25.104 *********
2026-04-07 00:59:59.496825 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496829 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.496833 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.496836 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.496840 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.496846 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.496850 | orchestrator |
2026-04-07 00:59:59.496853 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-07 00:59:59.496857 | orchestrator | Tuesday 07 April 2026 00:49:57 +0000 (0:00:00.917) 0:00:26.021 *********
2026-04-07 00:59:59.496861 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496865 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.496881 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.496885 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.496889 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.496895 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.496900 | orchestrator |
2026-04-07 00:59:59.496904 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-07 00:59:59.496908 | orchestrator | Tuesday 07 April 2026 00:49:57 +0000 (0:00:00.757) 0:00:26.779 *********
2026-04-07 00:59:59.496911 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496915 | orchestrator |
2026-04-07 00:59:59.496919 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-07 00:59:59.496923 | orchestrator | Tuesday 07 April 2026 00:49:58 +0000 (0:00:00.122) 0:00:26.902 *********
2026-04-07 00:59:59.496927 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496930 | orchestrator |
2026-04-07 00:59:59.496936 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-07 00:59:59.496941 | orchestrator | Tuesday 07 April 2026 00:49:58 +0000 (0:00:00.253) 0:00:27.157 *********
2026-04-07 00:59:59.496945 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.496949 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.496953 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.496956 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.496960 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.496964 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.496977 | orchestrator |
2026-04-07 00:59:59.496985 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-07 00:59:59.496989 | orchestrator | Tuesday 07 April 2026 00:49:59 +0000 (0:00:00.720) 0:00:27.878 *********
2026-04-07 00:59:59.497024 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.497032 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.497038 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.497044 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.497051 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.497057 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.497062 | orchestrator |
2026-04-07 00:59:59.497066 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-07 00:59:59.497086 | orchestrator | Tuesday 07 April 2026 00:50:00 +0000 (0:00:01.138) 0:00:29.016 *********
2026-04-07 00:59:59.497090 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.497094 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.497098 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.497102 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.497105 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.497117 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.497122 | orchestrator |
2026-04-07 00:59:59.497125 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-07 00:59:59.497129 | orchestrator | Tuesday 07 April 2026 00:50:00 +0000 (0:00:00.827) 0:00:29.844 *********
2026-04-07 00:59:59.497133 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.497137 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.497140 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.497144 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.497176 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.497181 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.497185 | orchestrator |
2026-04-07 00:59:59.497188 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-07 00:59:59.497192 | orchestrator | Tuesday 07 April 2026 00:50:01 +0000 (0:00:00.773) 0:00:30.618 *********
2026-04-07 00:59:59.497196 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.497200 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.497203 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.497207 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.497211 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.497215 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.497230 | orchestrator |
2026-04-07 00:59:59.497234 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-07 00:59:59.497238 | orchestrator | Tuesday 07 April 2026 00:50:02 +0000 (0:00:00.859) 0:00:31.477 *********
2026-04-07 00:59:59.497241 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.497245 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.497249 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.497253 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.497256 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.497260 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.497264 | orchestrator |
2026-04-07 00:59:59.497267 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-07 00:59:59.497271 | orchestrator | Tuesday 07 April 2026 00:50:04 +0000 (0:00:01.714) 0:00:33.192 *********
2026-04-07 00:59:59.497275 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.497279 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.497282 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.497286 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.497290 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.497294 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.497297 | orchestrator |
2026-04-07 00:59:59.497301 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-07 00:59:59.497305 | orchestrator | Tuesday 07 April 2026 00:50:05 +0000 (0:00:01.151) 0:00:34.343 *********
2026-04-07 00:59:59.497312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part1', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part14', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part15', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part16', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-07 00:59:59.497399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-07 00:59:59.497408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497428 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.497432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-07 00:59:59.497440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part1', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part14', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part15', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part16', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.497445 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.497449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.497458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.497508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.497519 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.497527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.497531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.497535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.497539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.497545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part1', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part14', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part15', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part16', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498291 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.498299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--68f67d56--373d--5470--8a0c--a7bd578cf9eb-osd--block--68f67d56--373d--5470--8a0c--a7bd578cf9eb', 'dm-uuid-LVM-mZEZ9AEcVigBLCVKnQ6kQvuHeb6scNqtafvZSbe2zBaKe5Zscx1bDxau8nTCY3nG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d-osd--block--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d', 
'dm-uuid-LVM-kJxz3LjCmaVw5gnVhd5O9Lq30TLxbGyYnMbiBl81TypAzKu55NRLfqXyqo1atvPN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498359 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.498365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--68f67d56--373d--5470--8a0c--a7bd578cf9eb-osd--block--68f67d56--373d--5470--8a0c--a7bd578cf9eb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WhS6TY-smGD-0vTn-PSrp-JmLa-lOKo-hj7dKO', 'scsi-0QEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706', 'scsi-SQEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43d30fb7--a654--5dbf--ba50--28c21932998c-osd--block--43d30fb7--a654--5dbf--ba50--28c21932998c', 'dm-uuid-LVM-RuzjjpGuKLhfgUSO0j9UbYZHMgVcRrMpS6o1eT39eBftYeXGtMpit0E42pIr0kUx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d-osd--block--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d'], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-upoiZR-Zew0-FZ2C-oske-9ezc-Kpbr-uderoV', 'scsi-0QEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4', 'scsi-SQEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73', 'scsi-SQEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--db8a0de8--f58a--5642--89e2--a8dce5d117db-osd--block--db8a0de8--f58a--5642--89e2--a8dce5d117db', 'dm-uuid-LVM-VvjF4eKbyQ2OsUFWPqkAeuu8RDIhsJqdSbu69fqEotkdp205IrUnOedu7OwbQzsf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498555 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.498560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--43d30fb7--a654--5dbf--ba50--28c21932998c-osd--block--43d30fb7--a654--5dbf--ba50--28c21932998c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zfGsI3-iIvv-uwmH-oqOE-dgq8-Rk0R-VsyNE0', 'scsi-0QEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30', 'scsi-SQEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--db8a0de8--f58a--5642--89e2--a8dce5d117db-osd--block--db8a0de8--f58a--5642--89e2--a8dce5d117db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eh3ESV-U19B-yaNr-BV5N-BXEc-oddH-ucsgyx', 'scsi-0QEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e', 'scsi-SQEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39', 'scsi-SQEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498597 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.498601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0-osd--block--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0', 'dm-uuid-LVM-F4n5dWigBqQIu532mQIWDLNYgUVJ3BiW6X8R8cxS1h8GruTxaNBrDSP8BCYV40NR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27d9f8cd--a6eb--5015--929a--744349431582-osd--block--27d9f8cd--a6eb--5015--929a--744349431582', 'dm-uuid-LVM-xQmpgel33ejVPKRtIAxG6GhkzWbexzdvAlfpdstTkLoDf6WgX3pw0feGhHV3cgko'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498640 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 00:59:59.498675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part1', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part14', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part15', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part16', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498684 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0-osd--block--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WpWuhZ-s2vi-68wW-8qq6-nf7r-XLSU-nWzndG', 'scsi-0QEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d', 'scsi-SQEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--27d9f8cd--a6eb--5015--929a--744349431582-osd--block--27d9f8cd--a6eb--5015--929a--744349431582'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-F7fEoT-KAuk-uLWY-FeSF-tPj5-bvFy-p5511y', 'scsi-0QEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe', 'scsi-SQEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c', 'scsi-SQEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 00:59:59.498750 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.498757 | orchestrator | 2026-04-07 00:59:59.498764 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-07 00:59:59.498771 | orchestrator | Tuesday 07 April 2026 00:50:07 +0000 (0:00:02.475) 0:00:36.819 ********* 2026-04-07 00:59:59.498777 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498788 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498794 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498799 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498809 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498816 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498827 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498838 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498848 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part1', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part14', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part15', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part16', 'scsi-SQEMU_QEMU_HARDDISK_3df5c3d7-f562-4b98-85e9-985d74ba8432-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-07 00:59:59.498859 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498866 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498874 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498878 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498882 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498888 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498892 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498898 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498908 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498915 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.498924 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part1', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part14', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part15', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part16', 'scsi-SQEMU_QEMU_HARDDISK_63f89094-a177-4f34-9706-4c412ab91d72-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498933 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498947 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498958 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498965 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498971 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498980 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498987 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.498998 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499010 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499021 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part1', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part14', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part15', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part16', 'scsi-SQEMU_QEMU_HARDDISK_92e9469d-beee-4970-a2a1-38a209111f07-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499026 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499031 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.499043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--68f67d56--373d--5470--8a0c--a7bd578cf9eb-osd--block--68f67d56--373d--5470--8a0c--a7bd578cf9eb', 'dm-uuid-LVM-mZEZ9AEcVigBLCVKnQ6kQvuHeb6scNqtafvZSbe2zBaKe5Zscx1bDxau8nTCY3nG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499051 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d-osd--block--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d', 'dm-uuid-LVM-kJxz3LjCmaVw5gnVhd5O9Lq30TLxbGyYnMbiBl81TypAzKu55NRLfqXyqo1atvPN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499056 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-07 00:59:59.499060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499072 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.499077 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-07 00:59:59.499086 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499091 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499096 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499103 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43d30fb7--a654--5dbf--ba50--28c21932998c-osd--block--43d30fb7--a654--5dbf--ba50--28c21932998c', 'dm-uuid-LVM-RuzjjpGuKLhfgUSO0j9UbYZHMgVcRrMpS6o1eT39eBftYeXGtMpit0E42pIr0kUx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499112 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--db8a0de8--f58a--5642--89e2--a8dce5d117db-osd--block--db8a0de8--f58a--5642--89e2--a8dce5d117db', 'dm-uuid-LVM-VvjF4eKbyQ2OsUFWPqkAeuu8RDIhsJqdSbu69fqEotkdp205IrUnOedu7OwbQzsf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499119 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499132 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499139 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499146 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499152 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499158 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499167 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--68f67d56--373d--5470--8a0c--a7bd578cf9eb-osd--block--68f67d56--373d--5470--8a0c--a7bd578cf9eb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WhS6TY-smGD-0vTn-PSrp-JmLa-lOKo-hj7dKO', 'scsi-0QEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706', 'scsi-SQEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499782 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d-osd--block--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-upoiZR-Zew0-FZ2C-oske-9ezc-Kpbr-uderoV', 'scsi-0QEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4', 'scsi-SQEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499808 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part15', 
'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499813 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73', 'scsi-SQEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499825 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--43d30fb7--a654--5dbf--ba50--28c21932998c-osd--block--43d30fb7--a654--5dbf--ba50--28c21932998c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zfGsI3-iIvv-uwmH-oqOE-dgq8-Rk0R-VsyNE0', 'scsi-0QEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30', 'scsi-SQEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499831 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--db8a0de8--f58a--5642--89e2--a8dce5d117db-osd--block--db8a0de8--f58a--5642--89e2--a8dce5d117db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eh3ESV-U19B-yaNr-BV5N-BXEc-oddH-ucsgyx', 'scsi-0QEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e', 'scsi-SQEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499838 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499845 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.499854 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39', 'scsi-SQEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499864 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499870 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.499881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0-osd--block--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0', 'dm-uuid-LVM-F4n5dWigBqQIu532mQIWDLNYgUVJ3BiW6X8R8cxS1h8GruTxaNBrDSP8BCYV40NR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499888 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27d9f8cd--a6eb--5015--929a--744349431582-osd--block--27d9f8cd--a6eb--5015--929a--744349431582', 'dm-uuid-LVM-xQmpgel33ejVPKRtIAxG6GhkzWbexzdvAlfpdstTkLoDf6WgX3pw0feGhHV3cgko'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499910 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 00:59:59.499930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.499936 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.499943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.499950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.499963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part1', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part14', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part15', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part16', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.499974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0-osd--block--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WpWuhZ-s2vi-68wW-8qq6-nf7r-XLSU-nWzndG', 'scsi-0QEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d', 'scsi-SQEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.499982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--27d9f8cd--a6eb--5015--929a--744349431582-osd--block--27d9f8cd--a6eb--5015--929a--744349431582'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-F7fEoT-KAuk-uLWY-FeSF-tPj5-bvFy-p5511y', 'scsi-0QEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe', 'scsi-SQEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.499988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c', 'scsi-SQEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.499995 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 00:59:59.499999 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500002 | orchestrator |
2026-04-07 00:59:59.500006 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-07 00:59:59.500011 | orchestrator | Tuesday 07 April 2026 00:50:10 +0000 (0:00:02.321) 0:00:39.140 *********
2026-04-07 00:59:59.500017 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.500021 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.500025 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.500028 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.500032 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.500036 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.500040 | orchestrator |
2026-04-07 00:59:59.500043 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-07 00:59:59.500047 | orchestrator | Tuesday 07 April 2026 00:50:12 +0000 (0:00:02.207) 0:00:41.348 *********
2026-04-07 00:59:59.500051 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.500055 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.500058 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.500062 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.500066 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.500069 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.500073 | orchestrator |
2026-04-07 00:59:59.500077 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-07 00:59:59.500081 | orchestrator | Tuesday 07 April 2026 00:50:13 +0000 (0:00:01.133) 0:00:42.481 *********
2026-04-07 00:59:59.500085 | orchestrator | skipping: [testbed-node-0] 2026-04-07
00:59:59.500088 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.500092 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.500096 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500100 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.500104 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500107 | orchestrator |
2026-04-07 00:59:59.500111 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-07 00:59:59.500115 | orchestrator | Tuesday 07 April 2026 00:50:15 +0000 (0:00:01.483) 0:00:43.965 *********
2026-04-07 00:59:59.500119 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.500122 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.500126 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.500130 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500134 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.500137 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500141 | orchestrator |
2026-04-07 00:59:59.500145 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-07 00:59:59.500151 | orchestrator | Tuesday 07 April 2026 00:50:15 +0000 (0:00:00.810) 0:00:44.776 *********
2026-04-07 00:59:59.500155 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.500159 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.500162 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.500166 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500170 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.500174 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500177 | orchestrator |
2026-04-07 00:59:59.500181 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-07 00:59:59.500185 | orchestrator | Tuesday 07 April 2026 00:50:17 +0000 (0:00:01.754) 0:00:46.531 *********
2026-04-07 00:59:59.500189 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.500192 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.500196 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.500200 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500203 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.500207 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500211 | orchestrator |
2026-04-07 00:59:59.500225 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-07 00:59:59.500232 | orchestrator | Tuesday 07 April 2026 00:50:19 +0000 (0:00:03.539) 0:00:48.026 *********
2026-04-07 00:59:59.500236 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-07 00:59:59.500240 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:59:59.500244 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-07 00:59:59.500248 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-07 00:59:59.500252 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-07 00:59:59.500256 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-07 00:59:59.500260 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 00:59:59.500265 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-07 00:59:59.500270 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-07 00:59:59.500274 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-07 00:59:59.500281 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-07 00:59:59.500286 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-07 00:59:59.500290 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-07 00:59:59.500295 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-07 00:59:59.500299 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-07 00:59:59.500304 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-07 00:59:59.500308 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 00:59:59.500312 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-07 00:59:59.500317 | orchestrator |
2026-04-07 00:59:59.500321 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-07 00:59:59.500326 | orchestrator | Tuesday 07 April 2026 00:50:22 +0000 (0:00:03.539) 0:00:51.566 *********
2026-04-07 00:59:59.500330 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:59:59.500334 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 00:59:59.500339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 00:59:59.500344 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.500348 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-07 00:59:59.500353 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-07 00:59:59.500357 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-07 00:59:59.500361 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.500366 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-07 00:59:59.500370 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-07 00:59:59.500380 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-07 00:59:59.500384 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.500388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-07 00:59:59.500393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-07 00:59:59.500397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-07 00:59:59.500402 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500407 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-07 00:59:59.500411 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-07 00:59:59.500416 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-07 00:59:59.500420 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.500425 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-07 00:59:59.500429 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-07 00:59:59.500434 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-07 00:59:59.500438 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500442 | orchestrator |
2026-04-07 00:59:59.500447 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-07 00:59:59.500451 | orchestrator | Tuesday 07 April 2026 00:50:23 +0000 (0:00:00.988) 0:00:52.554 *********
2026-04-07 00:59:59.500456 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.500460 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.500464 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.500469 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.500474 | orchestrator |
2026-04-07 00:59:59.500478 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-07 00:59:59.500483 | orchestrator | Tuesday 07 April 2026 00:50:25 +0000 (0:00:01.425) 0:00:53.979 *********
2026-04-07 00:59:59.500488 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500492 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.500497 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500501 | orchestrator |
2026-04-07 00:59:59.500505 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-07 00:59:59.500510 | orchestrator | Tuesday 07 April 2026 00:50:25 +0000 (0:00:00.336) 0:00:54.316 *********
2026-04-07 00:59:59.500514 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500519 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.500523 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500528 | orchestrator |
2026-04-07 00:59:59.500532 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-07 00:59:59.500537 | orchestrator | Tuesday 07 April 2026 00:50:25 +0000 (0:00:00.364) 0:00:54.634 *********
2026-04-07 00:59:59.500541 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500546 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.500550 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500555 | orchestrator |
2026-04-07 00:59:59.500559 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-07 00:59:59.500564 | orchestrator | Tuesday 07 April 2026 00:50:26 +0000 (0:00:00.533) 0:00:54.998 *********
2026-04-07 00:59:59.500569 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.500573 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.500577 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.500581 | orchestrator |
2026-04-07 00:59:59.500584 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-07 00:59:59.500589 | orchestrator | Tuesday 07 April 2026 00:50:26 +0000 (0:00:00.533) 0:00:55.532 *********
2026-04-07 00:59:59.500596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 00:59:59.500602 | orchestrator |
skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 00:59:59.500610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 00:59:59.500616 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500623 | orchestrator |
2026-04-07 00:59:59.500629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-07 00:59:59.500635 | orchestrator | Tuesday 07 April 2026 00:50:27 +0000 (0:00:00.406) 0:00:55.938 *********
2026-04-07 00:59:59.500645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 00:59:59.500651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 00:59:59.500658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 00:59:59.500661 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500665 | orchestrator |
2026-04-07 00:59:59.500669 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-07 00:59:59.500673 | orchestrator | Tuesday 07 April 2026 00:50:27 +0000 (0:00:00.430) 0:00:56.369 *********
2026-04-07 00:59:59.500676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 00:59:59.500681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 00:59:59.500687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 00:59:59.500692 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500696 | orchestrator |
2026-04-07 00:59:59.500700 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-07 00:59:59.500704 | orchestrator | Tuesday 07 April 2026 00:50:27 +0000 (0:00:00.492) 0:00:56.861 *********
2026-04-07 00:59:59.500707 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.500711 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.500715 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.500719 | orchestrator |
2026-04-07 00:59:59.500722 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-07 00:59:59.500726 | orchestrator | Tuesday 07 April 2026 00:50:28 +0000 (0:00:00.300) 0:00:57.162 *********
2026-04-07 00:59:59.500730 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-07 00:59:59.500734 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-07 00:59:59.500737 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-07 00:59:59.500741 | orchestrator |
2026-04-07 00:59:59.500747 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-07 00:59:59.500751 | orchestrator | Tuesday 07 April 2026 00:50:29 +0000 (0:00:00.881) 0:00:58.044 *********
2026-04-07 00:59:59.500755 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:59:59.500759 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 00:59:59.500763 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 00:59:59.500766 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-07 00:59:59.500770 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-07 00:59:59.500774 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-07 00:59:59.500778 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-07 00:59:59.500782 | orchestrator |
2026-04-07 00:59:59.500785 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-07 00:59:59.500789 | orchestrator | Tuesday 07 April 2026 00:50:30 +0000 (0:00:01.241) 0:00:59.285 *********
2026-04-07 00:59:59.500793 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:59:59.500796 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 00:59:59.500800 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 00:59:59.500804 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-07 00:59:59.500808 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-07 00:59:59.500818 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-07 00:59:59.500823 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-07 00:59:59.500827 | orchestrator |
2026-04-07 00:59:59.500830 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-07 00:59:59.500834 | orchestrator | Tuesday 07 April 2026 00:50:33 +0000 (0:00:02.687) 0:01:01.972 *********
2026-04-07 00:59:59.500838 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.500842 | orchestrator |
2026-04-07 00:59:59.500846 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-07 00:59:59.500850 | orchestrator | Tuesday 07 April 2026 00:50:35 +0000 (0:00:02.008) 0:01:03.981 *********
2026-04-07 00:59:59.500853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.500857 | orchestrator |
2026-04-07 00:59:59.500861 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-07 00:59:59.500865 | orchestrator | Tuesday 07 April 2026 00:50:37 +0000 (0:00:02.182) 0:01:06.164 *********
2026-04-07 00:59:59.500868 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.500872 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.500876 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.500879 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.500883 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.500887 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.500890 | orchestrator |
2026-04-07 00:59:59.500894 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-07 00:59:59.500898 | orchestrator | Tuesday 07 April 2026 00:50:39 +0000 (0:00:02.032) 0:01:08.196 *********
2026-04-07 00:59:59.500902 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.500905 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.500909 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.500913 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.500916 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.500922 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.500926 | orchestrator |
2026-04-07 00:59:59.500930 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-07 00:59:59.500933 | orchestrator | Tuesday 07 April 2026 00:50:41 +0000 (0:00:01.866) 0:01:10.062 *********
2026-04-07 00:59:59.500937 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.500941 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.500945 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.500948 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.500952 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.500956 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.500959 | orchestrator |
2026-04-07 00:59:59.500963 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-07 00:59:59.500967 | orchestrator | Tuesday 07 April 2026 00:50:42 +0000 (0:00:01.259) 0:01:11.321 *********
2026-04-07 00:59:59.500971 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.500974 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.500978 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.500982 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.500985 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.500990 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.500997 | orchestrator |
2026-04-07 00:59:59.501000 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-07 00:59:59.501004 | orchestrator | Tuesday 07 April 2026 00:50:43 +0000 (0:00:01.320) 0:01:12.642 *********
2026-04-07 00:59:59.501008 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.501012 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.501018 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.501022 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.501026 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.501029 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.501033 | orchestrator |
2026-04-07 00:59:59.501037 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-07 00:59:59.501043 | orchestrator | Tuesday 07 April 2026 00:50:44 +0000 (0:00:00.725) 0:01:13.368 *********
2026-04-07 00:59:59.501047 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.501050 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.501054 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.501058 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.501061 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.501065 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.501069 |
orchestrator |
2026-04-07 00:59:59.501073 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-07 00:59:59.501076 | orchestrator | Tuesday 07 April 2026 00:50:45 +0000 (0:00:00.827) 0:01:14.195 *********
2026-04-07 00:59:59.501080 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.501084 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.501088 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.501091 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.501095 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.501099 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.501102 | orchestrator |
2026-04-07 00:59:59.501106 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-07 00:59:59.501110 | orchestrator | Tuesday 07 April 2026 00:50:45 +0000 (0:00:00.516) 0:01:14.712 *********
2026-04-07 00:59:59.501114 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.501117 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.501121 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.501125 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.501128 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.501132 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.501136 | orchestrator |
2026-04-07 00:59:59.501140 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-07 00:59:59.501143 | orchestrator | Tuesday 07 April 2026 00:50:47 +0000 (0:00:01.418) 0:01:16.131 *********
2026-04-07 00:59:59.501147 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.501151 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.501155 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.501158 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.501162 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.501165 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.501169 | orchestrator |
2026-04-07 00:59:59.501173 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-07 00:59:59.501177 | orchestrator | Tuesday 07 April 2026 00:50:48 +0000 (0:00:01.306) 0:01:17.437 *********
2026-04-07 00:59:59.501180 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.501184 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.501188 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.501192 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.501195 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.501199 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.501203 | orchestrator |
2026-04-07 00:59:59.501206 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-07 00:59:59.501210 | orchestrator | Tuesday 07 April 2026 00:50:49 +0000 (0:00:00.723) 0:01:18.160 *********
2026-04-07 00:59:59.501214 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.501229 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.501233 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.501237 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.501241 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.501244 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.501248 | orchestrator |
2026-04-07 00:59:59.501254 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-07 00:59:59.501258 | orchestrator | Tuesday 07 April 2026 00:50:50 +0000 (0:00:00.774) 0:01:18.935 *********
2026-04-07 00:59:59.501262 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.501265 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.501269 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.501273 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.501276 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.501280 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.501284 | orchestrator |
2026-04-07 00:59:59.501287 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-07 00:59:59.501291 | orchestrator | Tuesday 07 April 2026 00:50:51 +0000 (0:00:00.936) 0:01:19.871 *********
2026-04-07 00:59:59.501295 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.501298 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.501302 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.501306 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.501312 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.501315 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.501319 | orchestrator |
2026-04-07 00:59:59.501323 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-07 00:59:59.501327 | orchestrator | Tuesday 07 April 2026 00:50:51 +0000 (0:00:00.630) 0:01:20.502 *********
2026-04-07 00:59:59.501331 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.501334 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.501338 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.501342 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.501345 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.501349 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.501353 | orchestrator |
2026-04-07 00:59:59.501356 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-07 00:59:59.501360 | orchestrator | Tuesday 07 April 2026 00:50:52 +0000 (0:00:00.755) 0:01:21.258 *********
2026-04-07 00:59:59.501364 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.501367 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.501371 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.501375 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.501378 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.501382 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.501386 | orchestrator |
2026-04-07 00:59:59.501390 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-07 00:59:59.501393 | orchestrator | Tuesday 07 April 2026 00:50:53 +0000 (0:00:00.755) 0:01:22.014 *********
2026-04-07 00:59:59.501397 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.501401 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.501405 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.501408 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.501412 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.501416 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.501419 | orchestrator |
2026-04-07 00:59:59.501425 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-07 00:59:59.501429 | orchestrator | Tuesday 07 April 2026 00:50:53 +0000 (0:00:00.659) 0:01:22.673 *********
2026-04-07 00:59:59.501433 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.501436 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.501440 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.501444 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.501447 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.501451 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.501455 | orchestrator |
2026-04-07 00:59:59.501459 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-07 00:59:59.501463 | orchestrator | Tuesday 07 April 2026 00:50:54 +0000 (0:00:00.526) 0:01:23.199 *********
2026-04-07 00:59:59.501466 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.501473 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.501477 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.501480 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.501484 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.501488 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.501492 | orchestrator |
2026-04-07 00:59:59.501495 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-07 00:59:59.501499 | orchestrator | Tuesday 07 April 2026 00:50:55 +0000 (0:00:00.865) 0:01:24.065 *********
2026-04-07 00:59:59.501503 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.501507 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.501510 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.501514 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.501518 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.501521 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.501525 | orchestrator |
2026-04-07 00:59:59.501529 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-07 00:59:59.501533 | orchestrator | Tuesday 07 April 2026 00:50:56 +0000 (0:00:01.477) 0:01:25.542 *********
2026-04-07 00:59:59.501537 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:59:59.501541 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:59:59.501545 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:59:59.501548 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:59:59.501552 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:59:59.501556 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:59:59.501560 | orchestrator |
2026-04-07 00:59:59.501564 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-07 00:59:59.501567 | orchestrator | Tuesday 07 April 2026 00:50:58 +0000 (0:00:01.643) 0:01:27.185
********* 2026-04-07 00:59:59.501571 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.501575 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.501579 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.501582 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.501586 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.501590 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.501594 | orchestrator | 2026-04-07 00:59:59.501597 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-07 00:59:59.501601 | orchestrator | Tuesday 07 April 2026 00:51:01 +0000 (0:00:03.087) 0:01:30.273 ********* 2026-04-07 00:59:59.501605 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.501609 | orchestrator | 2026-04-07 00:59:59.501613 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-07 00:59:59.501616 | orchestrator | Tuesday 07 April 2026 00:51:02 +0000 (0:00:01.192) 0:01:31.466 ********* 2026-04-07 00:59:59.501620 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.501624 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.501628 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.501632 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.501635 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.501639 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.501643 | orchestrator | 2026-04-07 00:59:59.501647 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-07 00:59:59.501650 | orchestrator | Tuesday 07 April 2026 00:51:03 +0000 (0:00:00.566) 0:01:32.032 ********* 2026-04-07 00:59:59.501654 | orchestrator | skipping: [testbed-node-0] 
2026-04-07 00:59:59.501658 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.501661 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.501665 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.501671 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.501674 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.501678 | orchestrator | 2026-04-07 00:59:59.501682 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-07 00:59:59.501688 | orchestrator | Tuesday 07 April 2026 00:51:03 +0000 (0:00:00.798) 0:01:32.831 ********* 2026-04-07 00:59:59.501692 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-07 00:59:59.501695 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-07 00:59:59.501699 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-07 00:59:59.501703 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-07 00:59:59.501707 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-07 00:59:59.501710 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-07 00:59:59.501715 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-07 00:59:59.501722 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-07 00:59:59.501726 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-07 00:59:59.501729 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-07 00:59:59.501733 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-07 00:59:59.501739 | 
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-07 00:59:59.501743 | orchestrator | 2026-04-07 00:59:59.501746 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-07 00:59:59.501750 | orchestrator | Tuesday 07 April 2026 00:51:05 +0000 (0:00:01.351) 0:01:34.183 ********* 2026-04-07 00:59:59.501754 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.501757 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.501761 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.501765 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.501769 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.501772 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.501776 | orchestrator | 2026-04-07 00:59:59.501780 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-07 00:59:59.501784 | orchestrator | Tuesday 07 April 2026 00:51:06 +0000 (0:00:01.168) 0:01:35.351 ********* 2026-04-07 00:59:59.501787 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.501791 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.501795 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.501798 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.501802 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.501806 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.501809 | orchestrator | 2026-04-07 00:59:59.501813 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-07 00:59:59.501817 | orchestrator | Tuesday 07 April 2026 00:51:07 +0000 (0:00:00.653) 0:01:36.004 ********* 2026-04-07 00:59:59.501821 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.501824 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.501828 | orchestrator | skipping: 
[testbed-node-2] 2026-04-07 00:59:59.501832 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.501835 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.501839 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.501843 | orchestrator | 2026-04-07 00:59:59.501847 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-07 00:59:59.501850 | orchestrator | Tuesday 07 April 2026 00:51:08 +0000 (0:00:00.990) 0:01:36.995 ********* 2026-04-07 00:59:59.501854 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.501858 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.501861 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.501865 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.501869 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.501875 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.501878 | orchestrator | 2026-04-07 00:59:59.501882 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-07 00:59:59.501886 | orchestrator | Tuesday 07 April 2026 00:51:08 +0000 (0:00:00.686) 0:01:37.682 ********* 2026-04-07 00:59:59.501890 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.501894 | orchestrator | 2026-04-07 00:59:59.501897 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-07 00:59:59.501901 | orchestrator | Tuesday 07 April 2026 00:51:10 +0000 (0:00:01.307) 0:01:38.989 ********* 2026-04-07 00:59:59.501905 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.501908 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.501912 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.501916 | orchestrator | ok: [testbed-node-1] 2026-04-07 
00:59:59.501920 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.501923 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.501927 | orchestrator | 2026-04-07 00:59:59.501931 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-07 00:59:59.501935 | orchestrator | Tuesday 07 April 2026 00:52:14 +0000 (0:01:04.202) 0:02:43.191 ********* 2026-04-07 00:59:59.501938 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-07 00:59:59.501942 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-07 00:59:59.501946 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-07 00:59:59.501949 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.501953 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-07 00:59:59.501959 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-07 00:59:59.501963 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-07 00:59:59.501967 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.501970 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-07 00:59:59.501974 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-07 00:59:59.501978 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-07 00:59:59.501981 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.501985 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-07 00:59:59.501989 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-07 00:59:59.501993 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-04-07 00:59:59.501996 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502000 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-07 00:59:59.502004 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-07 00:59:59.502007 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-07 00:59:59.502037 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502044 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-07 00:59:59.502051 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-07 00:59:59.502055 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-07 00:59:59.502058 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502062 | orchestrator | 2026-04-07 00:59:59.502066 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-07 00:59:59.502070 | orchestrator | Tuesday 07 April 2026 00:52:15 +0000 (0:00:00.822) 0:02:44.014 ********* 2026-04-07 00:59:59.502073 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502081 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502084 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502088 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502092 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502096 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502099 | orchestrator | 2026-04-07 00:59:59.502103 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-07 00:59:59.502107 | orchestrator | Tuesday 07 April 2026 00:52:15 +0000 (0:00:00.842) 0:02:44.857 ********* 2026-04-07 00:59:59.502111 | orchestrator | skipping: 
[testbed-node-0] 2026-04-07 00:59:59.502114 | orchestrator | 2026-04-07 00:59:59.502118 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-07 00:59:59.502122 | orchestrator | Tuesday 07 April 2026 00:52:16 +0000 (0:00:00.401) 0:02:45.258 ********* 2026-04-07 00:59:59.502125 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502129 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502133 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502137 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502140 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502144 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502148 | orchestrator | 2026-04-07 00:59:59.502151 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-07 00:59:59.502155 | orchestrator | Tuesday 07 April 2026 00:52:17 +0000 (0:00:00.693) 0:02:45.952 ********* 2026-04-07 00:59:59.502159 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502163 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502166 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502170 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502174 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502177 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502181 | orchestrator | 2026-04-07 00:59:59.502185 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-07 00:59:59.502189 | orchestrator | Tuesday 07 April 2026 00:52:18 +0000 (0:00:01.052) 0:02:47.005 ********* 2026-04-07 00:59:59.502192 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502196 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502200 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502203 | orchestrator | skipping: 
[testbed-node-3] 2026-04-07 00:59:59.502207 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502211 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502215 | orchestrator | 2026-04-07 00:59:59.502232 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-07 00:59:59.502238 | orchestrator | Tuesday 07 April 2026 00:52:19 +0000 (0:00:00.965) 0:02:47.970 ********* 2026-04-07 00:59:59.502245 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.502251 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.502257 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.502264 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.502268 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.502271 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.502275 | orchestrator | 2026-04-07 00:59:59.502279 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-07 00:59:59.502283 | orchestrator | Tuesday 07 April 2026 00:52:21 +0000 (0:00:02.157) 0:02:50.127 ********* 2026-04-07 00:59:59.502286 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.502290 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.502294 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.502298 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.502301 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.502305 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.502309 | orchestrator | 2026-04-07 00:59:59.502312 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-07 00:59:59.502316 | orchestrator | Tuesday 07 April 2026 00:52:22 +0000 (0:00:01.176) 0:02:51.304 ********* 2026-04-07 00:59:59.502326 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1, testbed-node-2, testbed-node-0, testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-07 00:59:59.502330 | orchestrator | 2026-04-07 00:59:59.502334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-07 00:59:59.502338 | orchestrator | Tuesday 07 April 2026 00:52:23 +0000 (0:00:01.516) 0:02:52.820 ********* 2026-04-07 00:59:59.502342 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502346 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502349 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502353 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502357 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502361 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502364 | orchestrator | 2026-04-07 00:59:59.502368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-07 00:59:59.502372 | orchestrator | Tuesday 07 April 2026 00:52:24 +0000 (0:00:00.650) 0:02:53.471 ********* 2026-04-07 00:59:59.502376 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502379 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502383 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502387 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502390 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502394 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502398 | orchestrator | 2026-04-07 00:59:59.502402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-07 00:59:59.502407 | orchestrator | Tuesday 07 April 2026 00:52:25 +0000 (0:00:00.844) 0:02:54.316 ********* 2026-04-07 00:59:59.502413 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502417 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502421 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502424 | orchestrator | skipping: 
[testbed-node-3] 2026-04-07 00:59:59.502428 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502439 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502443 | orchestrator | 2026-04-07 00:59:59.502447 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-07 00:59:59.502451 | orchestrator | Tuesday 07 April 2026 00:52:26 +0000 (0:00:00.636) 0:02:54.952 ********* 2026-04-07 00:59:59.502454 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502458 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502462 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502465 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502469 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502473 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502477 | orchestrator | 2026-04-07 00:59:59.502481 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-07 00:59:59.502484 | orchestrator | Tuesday 07 April 2026 00:52:27 +0000 (0:00:00.972) 0:02:55.924 ********* 2026-04-07 00:59:59.502488 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502492 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502495 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502499 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502503 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502506 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502510 | orchestrator | 2026-04-07 00:59:59.502514 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-07 00:59:59.502518 | orchestrator | Tuesday 07 April 2026 00:52:27 +0000 (0:00:00.763) 0:02:56.688 ********* 2026-04-07 00:59:59.502521 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502525 | orchestrator | skipping: 
[testbed-node-1] 2026-04-07 00:59:59.502529 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502533 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502550 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502557 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502561 | orchestrator | 2026-04-07 00:59:59.502564 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-07 00:59:59.502568 | orchestrator | Tuesday 07 April 2026 00:52:28 +0000 (0:00:00.779) 0:02:57.467 ********* 2026-04-07 00:59:59.502572 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502576 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502579 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502583 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502587 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502591 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502594 | orchestrator | 2026-04-07 00:59:59.502598 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-07 00:59:59.502602 | orchestrator | Tuesday 07 April 2026 00:52:29 +0000 (0:00:00.573) 0:02:58.041 ********* 2026-04-07 00:59:59.502605 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.502609 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.502613 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.502617 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.502620 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.502624 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.502628 | orchestrator | 2026-04-07 00:59:59.502632 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-07 00:59:59.502635 | orchestrator | Tuesday 07 April 2026 00:52:30 +0000 (0:00:00.888) 
0:02:58.930 ********* 2026-04-07 00:59:59.502639 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.502643 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.502647 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.502650 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.502654 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.502658 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.502661 | orchestrator | 2026-04-07 00:59:59.502665 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-07 00:59:59.502669 | orchestrator | Tuesday 07 April 2026 00:52:31 +0000 (0:00:01.472) 0:03:00.402 ********* 2026-04-07 00:59:59.502673 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.502677 | orchestrator | 2026-04-07 00:59:59.502680 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-07 00:59:59.502684 | orchestrator | Tuesday 07 April 2026 00:52:32 +0000 (0:00:00.990) 0:03:01.393 ********* 2026-04-07 00:59:59.502690 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-07 00:59:59.502694 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-07 00:59:59.502698 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-07 00:59:59.502701 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-07 00:59:59.502705 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-07 00:59:59.502709 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-07 00:59:59.502712 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-07 00:59:59.502716 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-07 00:59:59.502720 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/) 2026-04-07 00:59:59.502724 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-07 00:59:59.502727 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-07 00:59:59.502731 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-07 00:59:59.502735 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-07 00:59:59.502739 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-07 00:59:59.502742 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-07 00:59:59.502746 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-07 00:59:59.502754 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-07 00:59:59.502758 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-07 00:59:59.502762 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-07 00:59:59.502765 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-07 00:59:59.502772 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-07 00:59:59.502776 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-07 00:59:59.502779 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-07 00:59:59.502783 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-07 00:59:59.502787 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-07 00:59:59.502790 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-07 00:59:59.502794 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-07 00:59:59.502798 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-07 00:59:59.502801 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-07 
00:59:59.502805 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-07 00:59:59.502809 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-07 00:59:59.502813 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-07 00:59:59.502816 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-07 00:59:59.502820 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-07 00:59:59.502824 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-07 00:59:59.502827 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-07 00:59:59.502831 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-07 00:59:59.502835 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 00:59:59.502839 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-07 00:59:59.502842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 00:59:59.502846 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-07 00:59:59.502850 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-07 00:59:59.502853 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-07 00:59:59.502857 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-07 00:59:59.502861 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-07 00:59:59.502864 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 00:59:59.502868 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-07 00:59:59.502872 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-07 00:59:59.502875 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-07 00:59:59.502879 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-07 00:59:59.502883 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 00:59:59.502887 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 00:59:59.502890 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 00:59:59.502894 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 00:59:59.502898 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 00:59:59.502902 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 00:59:59.502905 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-07 00:59:59.502909 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 00:59:59.502915 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 00:59:59.502919 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 00:59:59.502923 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 00:59:59.502928 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 00:59:59.502932 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 00:59:59.502936 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 00:59:59.502939 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-07 00:59:59.502943 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 00:59:59.502947 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 00:59:59.502951 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 00:59:59.502954 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 00:59:59.502958 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 00:59:59.502962 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-07 00:59:59.502966 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 00:59:59.502970 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 00:59:59.502973 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 00:59:59.502977 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 00:59:59.502981 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 00:59:59.502984 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 00:59:59.502988 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 00:59:59.502994 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 00:59:59.502998 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 00:59:59.503002 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 00:59:59.503006 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 00:59:59.503010 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 00:59:59.503013 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 00:59:59.503017 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 00:59:59.503021 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-07 00:59:59.503025 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-07 00:59:59.503028 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-07 00:59:59.503032 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 00:59:59.503036 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-07 00:59:59.503040 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-07 00:59:59.503044 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-07 00:59:59.503047 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-07 00:59:59.503051 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-07 00:59:59.503055 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-07 00:59:59.503058 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-07 00:59:59.503062 | orchestrator |
2026-04-07 00:59:59.503066 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-07 00:59:59.503070 | orchestrator | Tuesday 07 April 2026 00:52:39 +0000 (0:00:07.233) 0:03:08.627 *********
2026-04-07 00:59:59.503078 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503082 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503085 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503089 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.503093 | orchestrator |
2026-04-07 00:59:59.503097 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-07 00:59:59.503101 | orchestrator | Tuesday 07 April 2026 00:52:40 +0000 (0:00:01.005) 0:03:09.632 *********
2026-04-07 00:59:59.503105 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-07 00:59:59.503109 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-07 00:59:59.503113 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-07 00:59:59.503121 | orchestrator |
2026-04-07 00:59:59.503126 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-07 00:59:59.503132 | orchestrator | Tuesday 07 April 2026 00:52:41 +0000 (0:00:00.732) 0:03:10.365 *********
2026-04-07 00:59:59.503136 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-07 00:59:59.503140 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-07 00:59:59.503144 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-07 00:59:59.503148 | orchestrator |
2026-04-07 00:59:59.503151 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-07 00:59:59.503157 | orchestrator | Tuesday 07 April 2026 00:52:42 +0000 (0:00:01.368) 0:03:11.733 *********
2026-04-07 00:59:59.503161 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503165 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503169 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503173 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.503176 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.503180 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.503184 | orchestrator |
2026-04-07 00:59:59.503188 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-07 00:59:59.503191 | orchestrator | Tuesday 07 April 2026 00:52:43 +0000 (0:00:00.652) 0:03:12.386 *********
2026-04-07 00:59:59.503195 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503199 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503203 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503207 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.503211 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.503214 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.503246 | orchestrator |
2026-04-07 00:59:59.503250 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-07 00:59:59.503254 | orchestrator | Tuesday 07 April 2026 00:52:44 +0000 (0:00:00.679) 0:03:13.066 *********
2026-04-07 00:59:59.503258 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503262 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503266 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503270 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503273 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503277 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503281 | orchestrator |
2026-04-07 00:59:59.503285 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-07 00:59:59.503289 | orchestrator | Tuesday 07 April 2026 00:52:44 +0000 (0:00:00.429) 0:03:13.495 *********
2026-04-07 00:59:59.503299 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503307 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503311 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503314 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503318 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503322 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503325 | orchestrator |
2026-04-07 00:59:59.503329 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-07 00:59:59.503333 | orchestrator | Tuesday 07 April 2026 00:52:45 +0000 (0:00:00.679) 0:03:14.174 *********
2026-04-07 00:59:59.503337 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503340 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503344 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503348 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503351 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503355 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503359 | orchestrator |
2026-04-07 00:59:59.503363 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-07 00:59:59.503366 | orchestrator | Tuesday 07 April 2026 00:52:45 +0000 (0:00:00.623) 0:03:14.798 *********
2026-04-07 00:59:59.503370 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503374 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503378 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503381 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503385 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503389 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503392 | orchestrator |
2026-04-07 00:59:59.503396 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-07 00:59:59.503400 | orchestrator | Tuesday 07 April 2026 00:52:46 +0000 (0:00:00.434) 0:03:15.232 *********
2026-04-07 00:59:59.503404 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503408 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503411 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503415 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503422 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503427 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503430 | orchestrator |
2026-04-07 00:59:59.503434 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-07 00:59:59.503438 | orchestrator | Tuesday 07 April 2026 00:52:46 +0000 (0:00:00.594) 0:03:15.826 *********
2026-04-07 00:59:59.503441 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503445 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503449 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503453 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503457 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503460 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503464 | orchestrator |
2026-04-07 00:59:59.503468 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-07 00:59:59.503472 | orchestrator | Tuesday 07 April 2026 00:52:47 +0000 (0:00:00.558) 0:03:16.384 *********
2026-04-07 00:59:59.503475 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503479 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503483 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503486 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.503490 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.503494 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.503498 | orchestrator |
2026-04-07 00:59:59.503501 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-07 00:59:59.503505 | orchestrator | Tuesday 07 April 2026 00:52:49 +0000 (0:00:02.245) 0:03:18.630 *********
2026-04-07 00:59:59.503509 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503513 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503519 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503523 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.503526 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.503530 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.503534 | orchestrator |
2026-04-07 00:59:59.503538 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-07 00:59:59.503541 | orchestrator | Tuesday 07 April 2026 00:52:50 +0000 (0:00:00.664) 0:03:19.294 *********
2026-04-07 00:59:59.503545 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503549 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503555 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503559 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.503562 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.503566 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.503570 | orchestrator |
2026-04-07 00:59:59.503573 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-07 00:59:59.503577 | orchestrator | Tuesday 07 April 2026 00:52:51 +0000 (0:00:00.819) 0:03:20.113 *********
2026-04-07 00:59:59.503581 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503585 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503589 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503592 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503598 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503604 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503607 | orchestrator |
2026-04-07 00:59:59.503611 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-07 00:59:59.503615 | orchestrator | Tuesday 07 April 2026 00:52:51 +0000 (0:00:00.627) 0:03:20.740 *********
2026-04-07 00:59:59.503619 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503623 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503626 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503630 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-07 00:59:59.503634 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-07 00:59:59.503638 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-07 00:59:59.503642 | orchestrator |
2026-04-07 00:59:59.503648 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-07 00:59:59.503652 | orchestrator | Tuesday 07 April 2026 00:52:52 +0000 (0:00:00.976) 0:03:21.717 *********
2026-04-07 00:59:59.503655 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503659 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503663 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503668 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-07 00:59:59.503673 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-07 00:59:59.503677 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-07 00:59:59.503681 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-07 00:59:59.503688 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503691 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503695 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-07 00:59:59.503699 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-07 00:59:59.503703 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503707 | orchestrator |
2026-04-07 00:59:59.503711 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-07 00:59:59.503715 | orchestrator | Tuesday 07 April 2026 00:52:53 +0000 (0:00:00.650) 0:03:22.367 *********
2026-04-07 00:59:59.503719 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503723 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503726 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503730 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503734 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503737 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503741 | orchestrator |
2026-04-07 00:59:59.503745 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-07 00:59:59.503749 | orchestrator | Tuesday 07 April 2026 00:52:54 +0000 (0:00:00.705) 0:03:23.073 *********
2026-04-07 00:59:59.503752 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503756 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503760 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503764 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503768 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503771 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503775 | orchestrator |
2026-04-07 00:59:59.503779 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-07 00:59:59.503783 | orchestrator | Tuesday 07 April 2026 00:52:54 +0000 (0:00:00.779) 0:03:23.626 *********
2026-04-07 00:59:59.503786 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503790 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503794 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503798 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503801 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503805 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503809 | orchestrator |
2026-04-07 00:59:59.503812 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-07 00:59:59.503816 | orchestrator | Tuesday 07 April 2026 00:52:55 +0000 (0:00:00.579) 0:03:24.405 *********
2026-04-07 00:59:59.503820 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503824 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503827 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503831 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503835 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503839 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503842 | orchestrator |
2026-04-07 00:59:59.503846 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-07 00:59:59.503850 | orchestrator | Tuesday 07 April 2026 00:52:56 +0000 (0:00:00.579) 0:03:24.984 *********
2026-04-07 00:59:59.503854 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503859 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503866 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503870 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.503873 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.503877 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.503881 | orchestrator |
2026-04-07 00:59:59.503885 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-07 00:59:59.503889 | orchestrator | Tuesday 07 April 2026 00:52:56 +0000 (0:00:00.724) 0:03:25.709 *********
2026-04-07 00:59:59.503892 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503896 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.503900 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.503903 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.503907 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.503911 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.503915 | orchestrator |
2026-04-07 00:59:59.503918 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-07 00:59:59.503961 | orchestrator | Tuesday 07 April 2026 00:52:57 +0000 (0:00:00.909) 0:03:26.619 *********
2026-04-07 00:59:59.503972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-07 00:59:59.503976 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-07 00:59:59.503980 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-07 00:59:59.503983 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.503987 | orchestrator |
2026-04-07 00:59:59.503991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-07 00:59:59.503995 | orchestrator | Tuesday 07 April 2026 00:52:58 +0000 (0:00:00.703) 0:03:27.322 *********
2026-04-07 00:59:59.503999 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-07 00:59:59.504002 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-07 00:59:59.504006 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-07 00:59:59.504010 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.504013 | orchestrator |
2026-04-07 00:59:59.504017 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-07 00:59:59.504021 | orchestrator | Tuesday 07 April 2026 00:52:59 +0000 (0:00:00.639) 0:03:27.962 *********
2026-04-07 00:59:59.504025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-07 00:59:59.504028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-07 00:59:59.504032 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-07 00:59:59.504036 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.504039 | orchestrator |
2026-04-07 00:59:59.504043 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-07 00:59:59.504047 | orchestrator | Tuesday 07 April 2026 00:53:00 +0000 (0:00:00.930) 0:03:28.892 *********
2026-04-07 00:59:59.504051 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.504054 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.504058 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.504062 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.504066 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.504069 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.504073 | orchestrator |
2026-04-07 00:59:59.504077 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-07 00:59:59.504081 | orchestrator | Tuesday 07 April 2026 00:53:00 +0000 (0:00:00.760) 0:03:29.653 *********
2026-04-07 00:59:59.504084 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-07 00:59:59.504088 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-07 00:59:59.504092 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.504096 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.504100 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-07 00:59:59.504104 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.504107 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-07 00:59:59.504114 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-07 00:59:59.504118 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-07 00:59:59.504121 | orchestrator |
2026-04-07 00:59:59.504125 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-07 00:59:59.504129 | orchestrator | Tuesday 07 April 2026 00:53:03 +0000 (0:00:02.379) 0:03:32.032 *********
2026-04-07 00:59:59.504133 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:59:59.504138 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:59:59.504142 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:59:59.504146 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:59:59.504149 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:59:59.504153 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:59:59.504157 | orchestrator |
2026-04-07 00:59:59.504161 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-07 00:59:59.504164 | orchestrator | Tuesday 07 April 2026 00:53:06 +0000 (0:00:02.904) 0:03:34.937 *********
2026-04-07 00:59:59.504168 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:59:59.504172 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:59:59.504176 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:59:59.504179 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:59:59.504183 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:59:59.504187 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:59:59.504190 | orchestrator |
2026-04-07 00:59:59.504194 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-07 00:59:59.504198 | orchestrator | Tuesday 07 April 2026 00:53:07 +0000 (0:00:01.102) 0:03:36.039 *********
2026-04-07 00:59:59.504202 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504205 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.504209 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.504213 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:59:59.504227 | orchestrator |
2026-04-07 00:59:59.504234 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-07 00:59:59.504240 | orchestrator | Tuesday 07 April 2026 00:53:08 +0000 (0:00:01.219) 0:03:37.259 *********
2026-04-07 00:59:59.504248 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.504254 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.504260 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.504270 | orchestrator |
2026-04-07 00:59:59.504276 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-07 00:59:59.504280 | orchestrator | Tuesday 07 April 2026 00:53:08 +0000 (0:00:00.327) 0:03:37.586 *********
2026-04-07 00:59:59.504283 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:59:59.504287 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:59:59.504293 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:59:59.504299 | orchestrator |
2026-04-07 00:59:59.504303 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-07 00:59:59.504306 | orchestrator | Tuesday 07 April 2026 00:53:09 +0000 (0:00:01.199) 0:03:38.786 *********
2026-04-07 00:59:59.504310 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:59:59.504314 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 00:59:59.504317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 00:59:59.504321 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.504325 | orchestrator |
2026-04-07 00:59:59.504329 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-07 00:59:59.504333 | orchestrator | Tuesday 07 April 2026 00:53:10 +0000 (0:00:00.825) 0:03:39.612 *********
2026-04-07 00:59:59.504336 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.504340 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.504344 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.504347 | orchestrator |
2026-04-07 00:59:59.504351 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-07 00:59:59.504358 | orchestrator | Tuesday 07 April 2026 00:53:11 +0000 (0:00:00.441) 0:03:40.054 *********
2026-04-07 00:59:59.504362 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.504366 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.504370 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.504373 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.504377 | orchestrator |
2026-04-07 00:59:59.504381 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-07 00:59:59.504385 | orchestrator | Tuesday 07 April 2026 00:53:11 +0000 (0:00:00.704) 0:03:40.759 *********
2026-04-07 00:59:59.504388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 00:59:59.504392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 00:59:59.504396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 00:59:59.504400 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504403 | orchestrator |
2026-04-07 00:59:59.504407 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-07 00:59:59.504411 | orchestrator | Tuesday 07 April 2026 00:53:12 +0000 (0:00:00.363) 0:03:41.122 *********
2026-04-07 00:59:59.504415 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504418 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.504422 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.504426 | orchestrator |
2026-04-07 00:59:59.504430 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-07 00:59:59.504433 | orchestrator | Tuesday 07 April 2026 00:53:12 +0000 (0:00:00.405) 0:03:41.528 *********
2026-04-07 00:59:59.504437 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504441 | orchestrator |
2026-04-07 00:59:59.504445 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-07 00:59:59.504448 | orchestrator | Tuesday 07 April 2026 00:53:12 +0000 (0:00:00.195) 0:03:41.724 *********
2026-04-07 00:59:59.504452 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504456 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.504460 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.504464 | orchestrator |
2026-04-07 00:59:59.504468 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-07 00:59:59.504471 | orchestrator | Tuesday 07 April 2026 00:53:13 +0000 (0:00:00.273) 0:03:41.997 *********
2026-04-07 00:59:59.504475 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504479 | orchestrator |
2026-04-07 00:59:59.504483 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-07 00:59:59.504486 | orchestrator | Tuesday 07 April 2026 00:53:13 +0000 (0:00:00.203) 0:03:42.201 *********
2026-04-07 00:59:59.504490 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504494 | orchestrator |
2026-04-07 00:59:59.504500 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-07 00:59:59.504504 | orchestrator | Tuesday 07 April 2026 00:53:13 +0000 (0:00:00.183) 0:03:42.385 *********
2026-04-07 00:59:59.504508 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504512 | orchestrator |
2026-04-07 00:59:59.504515 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-07 00:59:59.504519 | orchestrator | Tuesday 07 April 2026 00:53:13 +0000 (0:00:00.099) 0:03:42.484 *********
2026-04-07 00:59:59.504523 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504527 | orchestrator |
2026-04-07 00:59:59.504531 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-07 00:59:59.504538 | orchestrator | Tuesday 07 April 2026 00:53:13 +0000 (0:00:00.193) 0:03:42.678 *********
2026-04-07 00:59:59.504542 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504546 | orchestrator |
2026-04-07 00:59:59.504549 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-07 00:59:59.504553 | orchestrator | Tuesday 07 April 2026 00:53:13 +0000 (0:00:00.173) 0:03:42.851 *********
2026-04-07 00:59:59.504560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 00:59:59.504564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 00:59:59.504567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 00:59:59.504571 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504575 | orchestrator |
2026-04-07 00:59:59.504579 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-07 00:59:59.504585 | orchestrator | Tuesday 07 April 2026 00:53:14 +0000 (0:00:00.531) 0:03:43.383 *********
2026-04-07 00:59:59.504592 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504601 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.504608 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.504614 | orchestrator |
2026-04-07 00:59:59.504621 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-07 00:59:59.504626 | orchestrator | Tuesday 07 April 2026 00:53:14 +0000 (0:00:00.447) 0:03:43.830 *********
2026-04-07 00:59:59.504632 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504638 | orchestrator |
2026-04-07 00:59:59.504644 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-07 00:59:59.504650 | orchestrator | Tuesday 07 April 2026 00:53:15 +0000 (0:00:00.243) 0:03:44.074 *********
2026-04-07 00:59:59.504656 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.504662 | orchestrator |
2026-04-07 00:59:59.504668 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-07 00:59:59.504673 | orchestrator | Tuesday 07 April 2026 00:53:15 +0000 (0:00:00.204) 0:03:44.278 *********
2026-04-07 00:59:59.504680 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.504685 | orchestrator | skipping: [testbed-node-2]
2026-04-07 00:59:59.504691 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.504697 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.504704 | orchestrator |
2026-04-07 00:59:59.504710 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-07 00:59:59.504716 | orchestrator | Tuesday 07 April 2026 00:53:16 +0000 (0:00:01.055) 0:03:45.334 *********
2026-04-07 00:59:59.504722 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.504727 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.504733 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.504739 | orchestrator |
2026-04-07 00:59:59.504745 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-07 00:59:59.504752 | orchestrator | Tuesday 07 April 2026 00:53:16 +0000 (0:00:00.374) 0:03:45.709 *********
2026-04-07 00:59:59.504758 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:59:59.504765 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:59:59.504770 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:59:59.504774 | orchestrator |
2026-04-07 00:59:59.504778 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-07
00:59:59.504782 | orchestrator | Tuesday 07 April 2026 00:53:18 +0000 (0:00:01.221) 0:03:46.931 ********* 2026-04-07 00:59:59.504785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 00:59:59.504789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 00:59:59.504793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 00:59:59.504797 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.504800 | orchestrator | 2026-04-07 00:59:59.504804 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-07 00:59:59.504808 | orchestrator | Tuesday 07 April 2026 00:53:18 +0000 (0:00:00.674) 0:03:47.605 ********* 2026-04-07 00:59:59.504812 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.504815 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.504819 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.504823 | orchestrator | 2026-04-07 00:59:59.504826 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-07 00:59:59.504835 | orchestrator | Tuesday 07 April 2026 00:53:18 +0000 (0:00:00.248) 0:03:47.854 ********* 2026-04-07 00:59:59.504838 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.504842 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.504846 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.504850 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.504853 | orchestrator | 2026-04-07 00:59:59.504857 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-07 00:59:59.504864 | orchestrator | Tuesday 07 April 2026 00:53:19 +0000 (0:00:00.817) 0:03:48.671 ********* 2026-04-07 00:59:59.504869 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.504873 | orchestrator | 
ok: [testbed-node-4] 2026-04-07 00:59:59.504876 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.504880 | orchestrator | 2026-04-07 00:59:59.504884 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-07 00:59:59.504888 | orchestrator | Tuesday 07 April 2026 00:53:20 +0000 (0:00:00.279) 0:03:48.950 ********* 2026-04-07 00:59:59.504894 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.504898 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.504902 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.504905 | orchestrator | 2026-04-07 00:59:59.504909 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-07 00:59:59.504913 | orchestrator | Tuesday 07 April 2026 00:53:21 +0000 (0:00:01.440) 0:03:50.391 ********* 2026-04-07 00:59:59.504917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 00:59:59.504920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 00:59:59.504924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 00:59:59.504928 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.504932 | orchestrator | 2026-04-07 00:59:59.504936 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-07 00:59:59.504939 | orchestrator | Tuesday 07 April 2026 00:53:22 +0000 (0:00:00.480) 0:03:50.871 ********* 2026-04-07 00:59:59.504943 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.504950 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.504954 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.504958 | orchestrator | 2026-04-07 00:59:59.504962 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-07 00:59:59.504966 | orchestrator | Tuesday 07 April 2026 00:53:22 +0000 (0:00:00.328) 0:03:51.200 ********* 
2026-04-07 00:59:59.504970 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.504977 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.504981 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.504984 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.504988 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.504992 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.504996 | orchestrator | 2026-04-07 00:59:59.505003 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-07 00:59:59.505007 | orchestrator | Tuesday 07 April 2026 00:53:22 +0000 (0:00:00.519) 0:03:51.720 ********* 2026-04-07 00:59:59.505010 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.505014 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.505018 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.505021 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.505025 | orchestrator | 2026-04-07 00:59:59.505029 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-07 00:59:59.505033 | orchestrator | Tuesday 07 April 2026 00:53:23 +0000 (0:00:00.928) 0:03:52.648 ********* 2026-04-07 00:59:59.505037 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505040 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505044 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505051 | orchestrator | 2026-04-07 00:59:59.505055 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-07 00:59:59.505058 | orchestrator | Tuesday 07 April 2026 00:53:24 +0000 (0:00:00.268) 0:03:52.916 ********* 2026-04-07 00:59:59.505062 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.505066 | orchestrator | changed: [testbed-node-1] 2026-04-07 
00:59:59.505070 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.505074 | orchestrator | 2026-04-07 00:59:59.505078 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-07 00:59:59.505081 | orchestrator | Tuesday 07 April 2026 00:53:25 +0000 (0:00:01.556) 0:03:54.473 ********* 2026-04-07 00:59:59.505085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-07 00:59:59.505089 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 00:59:59.505093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 00:59:59.505096 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505100 | orchestrator | 2026-04-07 00:59:59.505104 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-07 00:59:59.505108 | orchestrator | Tuesday 07 April 2026 00:53:26 +0000 (0:00:00.579) 0:03:55.052 ********* 2026-04-07 00:59:59.505114 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505119 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505123 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505127 | orchestrator | 2026-04-07 00:59:59.505130 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-07 00:59:59.505134 | orchestrator | 2026-04-07 00:59:59.505138 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-07 00:59:59.505142 | orchestrator | Tuesday 07 April 2026 00:53:26 +0000 (0:00:00.528) 0:03:55.581 ********* 2026-04-07 00:59:59.505145 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.505149 | orchestrator | 2026-04-07 00:59:59.505153 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-07 
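The ceph-handler sections above all follow the same pattern per daemon type: set a `_*_handler_called` fact before restarting, copy a restart script to the nodes, restart the daemons serially, then reset the fact. A minimal Python sketch of that control flow, with hypothetical helper names (the real logic lives in ceph-ansible's `handler_*.yml` task files):

```python
# Sketch of the ceph-handler flow seen in the log: guard -> mark ->
# serial restart -> unmark. All names here are illustrative.

def run_handler(daemon, nodes, needs_restart, restart_one):
    """Mimic 'Set _X_handler_called before restart' -> 'Restart ceph X
    daemon(s)' -> 'Set _X_handler_called after restart'."""
    if not needs_restart:
        # Corresponds to the many 'skipping:' lines in the log, where no
        # config change requires a daemon restart.
        return []
    handler_called = True           # Set _X_handler_called before restart
    restarted = []
    for node in nodes:              # daemons restart one node at a time
        restart_one(daemon, node)   # stands in for the copied restart script
        restarted.append(node)
    handler_called = False          # Set _X_handler_called after restart
    return restarted

# Dry run with a stub restart function, mirroring the mds handler on
# testbed-node-3..5.
calls = []
run_handler("mds", ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
            needs_restart=True,
            restart_one=lambda d, n: calls.append((d, n)))
```

The before/after fact pair is what lets ceph-ansible skip the restart block when the same handler is notified again later in the play.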
00:59:59.505157 | orchestrator | Tuesday 07 April 2026 00:53:27 +0000 (0:00:00.590) 0:03:56.172 ********* 2026-04-07 00:59:59.505160 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-04-07 00:59:59.505164 | orchestrator | 2026-04-07 00:59:59.505168 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-07 00:59:59.505172 | orchestrator | Tuesday 07 April 2026 00:53:27 +0000 (0:00:00.655) 0:03:56.827 ********* 2026-04-07 00:59:59.505175 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505179 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505183 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505187 | orchestrator | 2026-04-07 00:59:59.505191 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-07 00:59:59.505194 | orchestrator | Tuesday 07 April 2026 00:53:28 +0000 (0:00:00.721) 0:03:57.548 ********* 2026-04-07 00:59:59.505198 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505202 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505206 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505209 | orchestrator | 2026-04-07 00:59:59.505213 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-07 00:59:59.505308 | orchestrator | Tuesday 07 April 2026 00:53:29 +0000 (0:00:00.416) 0:03:57.965 ********* 2026-04-07 00:59:59.505320 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505324 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505328 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505332 | orchestrator | 2026-04-07 00:59:59.505336 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-07 00:59:59.505339 | orchestrator | Tuesday 07 April 2026 00:53:29 
+0000 (0:00:00.382) 0:03:58.347 ********* 2026-04-07 00:59:59.505343 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505351 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505355 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505359 | orchestrator | 2026-04-07 00:59:59.505363 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-07 00:59:59.505366 | orchestrator | Tuesday 07 April 2026 00:53:29 +0000 (0:00:00.337) 0:03:58.685 ********* 2026-04-07 00:59:59.505370 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505374 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505378 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505382 | orchestrator | 2026-04-07 00:59:59.505386 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-07 00:59:59.505389 | orchestrator | Tuesday 07 April 2026 00:53:30 +0000 (0:00:00.686) 0:03:59.372 ********* 2026-04-07 00:59:59.505393 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505397 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505400 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505404 | orchestrator | 2026-04-07 00:59:59.505408 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-07 00:59:59.505411 | orchestrator | Tuesday 07 April 2026 00:53:30 +0000 (0:00:00.449) 0:03:59.821 ********* 2026-04-07 00:59:59.505415 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505419 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505423 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505426 | orchestrator | 2026-04-07 00:59:59.505435 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-07 00:59:59.505439 | orchestrator | Tuesday 07 April 2026 00:53:31 +0000 (0:00:00.286) 
0:04:00.108 ********* 2026-04-07 00:59:59.505443 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505447 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505450 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505454 | orchestrator | 2026-04-07 00:59:59.505458 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-07 00:59:59.505462 | orchestrator | Tuesday 07 April 2026 00:53:31 +0000 (0:00:00.642) 0:04:00.750 ********* 2026-04-07 00:59:59.505465 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505469 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505473 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505477 | orchestrator | 2026-04-07 00:59:59.505480 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-07 00:59:59.505484 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.626) 0:04:01.377 ********* 2026-04-07 00:59:59.505488 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505492 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505496 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505499 | orchestrator | 2026-04-07 00:59:59.505503 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-07 00:59:59.505507 | orchestrator | Tuesday 07 April 2026 00:53:32 +0000 (0:00:00.272) 0:04:01.649 ********* 2026-04-07 00:59:59.505511 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505515 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505518 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505522 | orchestrator | 2026-04-07 00:59:59.505526 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-07 00:59:59.505530 | orchestrator | Tuesday 07 April 2026 00:53:33 +0000 (0:00:00.487) 0:04:02.137 ********* 2026-04-07 00:59:59.505533 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505537 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505541 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505545 | orchestrator | 2026-04-07 00:59:59.505549 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-07 00:59:59.505553 | orchestrator | Tuesday 07 April 2026 00:53:33 +0000 (0:00:00.252) 0:04:02.390 ********* 2026-04-07 00:59:59.505556 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505560 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505564 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505570 | orchestrator | 2026-04-07 00:59:59.505574 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-07 00:59:59.505578 | orchestrator | Tuesday 07 April 2026 00:53:33 +0000 (0:00:00.256) 0:04:02.646 ********* 2026-04-07 00:59:59.505582 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505585 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505589 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505593 | orchestrator | 2026-04-07 00:59:59.505597 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-07 00:59:59.505600 | orchestrator | Tuesday 07 April 2026 00:53:34 +0000 (0:00:00.259) 0:04:02.906 ********* 2026-04-07 00:59:59.505604 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505608 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505612 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505615 | orchestrator | 2026-04-07 00:59:59.505619 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-07 00:59:59.505623 | orchestrator | Tuesday 07 April 2026 00:53:34 +0000 (0:00:00.413) 0:04:03.319 ********* 2026-04-07 00:59:59.505627 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505630 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.505634 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.505638 | orchestrator | 2026-04-07 00:59:59.505642 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-07 00:59:59.505645 | orchestrator | Tuesday 07 April 2026 00:53:34 +0000 (0:00:00.295) 0:04:03.615 ********* 2026-04-07 00:59:59.505649 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505653 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505664 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505668 | orchestrator | 2026-04-07 00:59:59.505671 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-07 00:59:59.505675 | orchestrator | Tuesday 07 April 2026 00:53:35 +0000 (0:00:00.416) 0:04:04.031 ********* 2026-04-07 00:59:59.505681 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505685 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505689 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505693 | orchestrator | 2026-04-07 00:59:59.505696 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-07 00:59:59.505700 | orchestrator | Tuesday 07 April 2026 00:53:35 +0000 (0:00:00.434) 0:04:04.466 ********* 2026-04-07 00:59:59.505704 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505708 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505711 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505715 | orchestrator | 2026-04-07 00:59:59.505719 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-07 00:59:59.505723 | orchestrator | Tuesday 07 April 2026 00:53:36 +0000 (0:00:00.835) 0:04:05.301 ********* 2026-04-07 00:59:59.505727 | orchestrator | ok: [testbed-node-0] 2026-04-07 
00:59:59.505730 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505734 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505738 | orchestrator | 2026-04-07 00:59:59.505742 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-07 00:59:59.505745 | orchestrator | Tuesday 07 April 2026 00:53:36 +0000 (0:00:00.307) 0:04:05.609 ********* 2026-04-07 00:59:59.505749 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-04-07 00:59:59.505753 | orchestrator | 2026-04-07 00:59:59.505757 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-07 00:59:59.505761 | orchestrator | Tuesday 07 April 2026 00:53:37 +0000 (0:00:00.743) 0:04:06.352 ********* 2026-04-07 00:59:59.505765 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.505768 | orchestrator | 2026-04-07 00:59:59.505772 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-07 00:59:59.505778 | orchestrator | Tuesday 07 April 2026 00:53:37 +0000 (0:00:00.246) 0:04:06.599 ********* 2026-04-07 00:59:59.505792 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-07 00:59:59.505805 | orchestrator | 2026-04-07 00:59:59.505809 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-07 00:59:59.505813 | orchestrator | Tuesday 07 April 2026 00:53:38 +0000 (0:00:01.152) 0:04:07.751 ********* 2026-04-07 00:59:59.505817 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505821 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505824 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505828 | orchestrator | 2026-04-07 00:59:59.505832 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-07 00:59:59.505835 | orchestrator | Tuesday 07 April 
2026 00:53:39 +0000 (0:00:00.468) 0:04:08.220 ********* 2026-04-07 00:59:59.505839 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505843 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505846 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505850 | orchestrator | 2026-04-07 00:59:59.505854 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-07 00:59:59.505858 | orchestrator | Tuesday 07 April 2026 00:53:39 +0000 (0:00:00.628) 0:04:08.848 ********* 2026-04-07 00:59:59.505861 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.505865 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.505869 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.505873 | orchestrator | 2026-04-07 00:59:59.505876 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-07 00:59:59.505880 | orchestrator | Tuesday 07 April 2026 00:53:41 +0000 (0:00:01.760) 0:04:10.608 ********* 2026-04-07 00:59:59.505884 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.505888 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.505892 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.505895 | orchestrator | 2026-04-07 00:59:59.505899 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-07 00:59:59.505903 | orchestrator | Tuesday 07 April 2026 00:53:43 +0000 (0:00:01.283) 0:04:11.891 ********* 2026-04-07 00:59:59.505907 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.505910 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.505919 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.505922 | orchestrator | 2026-04-07 00:59:59.505926 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-07 00:59:59.505930 | orchestrator | Tuesday 07 April 2026 00:53:43 +0000 
(0:00:00.827) 0:04:12.719 ********* 2026-04-07 00:59:59.505934 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505938 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.505941 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.505945 | orchestrator | 2026-04-07 00:59:59.505949 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-07 00:59:59.505953 | orchestrator | Tuesday 07 April 2026 00:53:44 +0000 (0:00:00.680) 0:04:13.400 ********* 2026-04-07 00:59:59.505956 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.505960 | orchestrator | 2026-04-07 00:59:59.505964 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-07 00:59:59.505968 | orchestrator | Tuesday 07 April 2026 00:53:45 +0000 (0:00:01.142) 0:04:14.542 ********* 2026-04-07 00:59:59.505971 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.505975 | orchestrator | 2026-04-07 00:59:59.505979 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-07 00:59:59.505983 | orchestrator | Tuesday 07 April 2026 00:53:46 +0000 (0:00:00.587) 0:04:15.130 ********* 2026-04-07 00:59:59.505987 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-07 00:59:59.505990 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 00:59:59.505994 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 00:59:59.505998 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-07 00:59:59.506002 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 00:59:59.506006 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 00:59:59.506086 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 00:59:59.506092 | 
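The keyring tasks above ("Generate monitor initial keyring", "Create admin keyring", "Copy admin keyring over to mons") produce CephX keyring files. A hedged sketch of what such a task generates, assuming the classic 16-byte secret wrapped in a small little-endian header (type, creation time, length) that `ceph-authtool --gen-key` uses; treat the exact header layout as an assumption:

```python
import base64
import os
import struct
import time

def make_cephx_secret():
    """Generate a random CephX secret in base64, assuming the wire
    format of a 16-byte key prefixed by <type, ctime sec, ctime nsec,
    key length> packed little-endian (an assumption; verify against
    ceph-authtool output before relying on it)."""
    key = os.urandom(16)
    now = time.time()
    header = struct.pack("<hiih", 1, int(now), int((now % 1) * 1e9), len(key))
    return base64.b64encode(header + key).decode("ascii")

def keyring_section(entity, secret, caps):
    """Render one keyring file section, e.g. the contents of
    /etc/ceph/ceph.client.admin.keyring."""
    lines = [f"[{entity}]", f"\tkey = {secret}"]
    for service, cap in caps.items():
        lines.append(f'\tcaps {service} = "{cap}"')
    return "\n".join(lines) + "\n"

# Usage: an admin keyring section like the one copied to the mons.
text = keyring_section("client.admin", make_cephx_secret(),
                       {"mon": "allow *", "osd": "allow *", "mgr": "allow *"})
```

In the real play, node-0 generates the key once (the `-> localhost` delegation in the log) and the other mons receive a copy, so all three monitors share identical credentials.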
orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-04-07 00:59:59.506096 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-07 00:59:59.506100 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-07 00:59:59.506106 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 00:59:59.506109 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-07 00:59:59.506113 | orchestrator | 2026-04-07 00:59:59.506117 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-07 00:59:59.506126 | orchestrator | Tuesday 07 April 2026 00:53:50 +0000 (0:00:04.210) 0:04:19.340 ********* 2026-04-07 00:59:59.506130 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.506134 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.506138 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.506142 | orchestrator | 2026-04-07 00:59:59.506146 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-07 00:59:59.506149 | orchestrator | Tuesday 07 April 2026 00:53:52 +0000 (0:00:01.935) 0:04:21.276 ********* 2026-04-07 00:59:59.506153 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506157 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.506161 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.506164 | orchestrator | 2026-04-07 00:59:59.506168 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-07 00:59:59.506172 | orchestrator | Tuesday 07 April 2026 00:53:52 +0000 (0:00:00.402) 0:04:21.679 ********* 2026-04-07 00:59:59.506176 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506180 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.506183 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.506187 | orchestrator | 2026-04-07 00:59:59.506191 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2026-04-07 00:59:59.506195 | orchestrator | Tuesday 07 April 2026 00:53:53 +0000 (0:00:00.328) 0:04:22.007 ********* 2026-04-07 00:59:59.506199 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.506202 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.506206 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.506210 | orchestrator | 2026-04-07 00:59:59.506249 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-07 00:59:59.506256 | orchestrator | Tuesday 07 April 2026 00:53:55 +0000 (0:00:02.678) 0:04:24.686 ********* 2026-04-07 00:59:59.506260 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.506264 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.506267 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.506271 | orchestrator | 2026-04-07 00:59:59.506275 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-07 00:59:59.506279 | orchestrator | Tuesday 07 April 2026 00:53:57 +0000 (0:00:01.466) 0:04:26.153 ********* 2026-04-07 00:59:59.506282 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506286 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506290 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506294 | orchestrator | 2026-04-07 00:59:59.506297 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-07 00:59:59.506301 | orchestrator | Tuesday 07 April 2026 00:53:57 +0000 (0:00:00.600) 0:04:26.753 ********* 2026-04-07 00:59:59.506305 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.506309 | orchestrator | 2026-04-07 00:59:59.506317 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-07 00:59:59.506321 | 
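The "Generate initial monmap" task above runs `monmaptool` to seed each monitor's data directory before `ceph-mon --mkfs`. A sketch of how that invocation is assembled (the flags shown are real `monmaptool` options, but the exact set ceph-ansible passes may differ slightly):

```python
def monmaptool_cmd(fsid, mons, out="/etc/ceph/monmap"):
    """Build a monmaptool command line creating an initial monmap with
    one --add entry per monitor. fsid and addresses are placeholders."""
    cmd = ["monmaptool", "--create", "--clobber", "--fsid", fsid]
    for name, addr in mons:
        cmd += ["--add", name, addr]
    cmd.append(out)
    return cmd

# Usage with the testbed's mon nodes and public_network addresses
# from the log (fsid is a made-up placeholder).
cmd = monmaptool_cmd("00000000-0000-0000-0000-000000000000",
                     [("testbed-node-0", "192.168.16.10"),
                      ("testbed-node-1", "192.168.16.11"),
                      ("testbed-node-2", "192.168.16.12")])
```

Because the task runs (containerized) on all three nodes with the same inputs, each monitor starts from an identical monmap, which is what allows them to find each other and form quorum in the next step.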
orchestrator | Tuesday 07 April 2026 00:53:58 +0000 (0:00:00.803) 0:04:27.557 ********* 2026-04-07 00:59:59.506325 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506329 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506332 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506340 | orchestrator | 2026-04-07 00:59:59.506344 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-07 00:59:59.506347 | orchestrator | Tuesday 07 April 2026 00:53:59 +0000 (0:00:00.839) 0:04:28.396 ********* 2026-04-07 00:59:59.506351 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506355 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506359 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506363 | orchestrator | 2026-04-07 00:59:59.506367 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-07 00:59:59.506370 | orchestrator | Tuesday 07 April 2026 00:54:00 +0000 (0:00:00.776) 0:04:29.172 ********* 2026-04-07 00:59:59.506374 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.506378 | orchestrator | 2026-04-07 00:59:59.506382 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-07 00:59:59.506386 | orchestrator | Tuesday 07 April 2026 00:54:00 +0000 (0:00:00.633) 0:04:29.806 ********* 2026-04-07 00:59:59.506390 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.506394 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.506397 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.506401 | orchestrator | 2026-04-07 00:59:59.506405 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-07 00:59:59.506409 | orchestrator | Tuesday 07 April 2026 00:54:04 +0000 (0:00:03.126) 
0:04:32.933 ********* 2026-04-07 00:59:59.506413 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.506416 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.506420 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.506424 | orchestrator | 2026-04-07 00:59:59.506428 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-07 00:59:59.506432 | orchestrator | Tuesday 07 April 2026 00:54:05 +0000 (0:00:01.005) 0:04:33.939 ********* 2026-04-07 00:59:59.506435 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.506439 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.506443 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.506446 | orchestrator | 2026-04-07 00:59:59.506450 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-07 00:59:59.506454 | orchestrator | Tuesday 07 April 2026 00:54:06 +0000 (0:00:01.712) 0:04:35.651 ********* 2026-04-07 00:59:59.506458 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.506462 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.506465 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.506469 | orchestrator | 2026-04-07 00:59:59.506473 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-07 00:59:59.506479 | orchestrator | Tuesday 07 April 2026 00:54:08 +0000 (0:00:02.036) 0:04:37.688 ********* 2026-04-07 00:59:59.506483 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.506487 | orchestrator | 2026-04-07 00:59:59.506491 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-04-07 00:59:59.506494 | orchestrator | Tuesday 07 April 2026 00:54:09 +0000 (0:00:00.654) 0:04:38.343 ********* 2026-04-07 00:59:59.506498 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-04-07 00:59:59.506502 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506506 | orchestrator | 2026-04-07 00:59:59.506510 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-07 00:59:59.506514 | orchestrator | Tuesday 07 April 2026 00:54:30 +0000 (0:00:21.322) 0:04:59.665 ********* 2026-04-07 00:59:59.506517 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506521 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.506525 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.506529 | orchestrator | 2026-04-07 00:59:59.506533 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-07 00:59:59.506540 | orchestrator | Tuesday 07 April 2026 00:54:37 +0000 (0:00:06.227) 0:05:05.893 ********* 2026-04-07 00:59:59.506544 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506547 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506551 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506555 | orchestrator | 2026-04-07 00:59:59.506559 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-07 00:59:59.506562 | orchestrator | Tuesday 07 April 2026 00:54:37 +0000 (0:00:00.342) 0:05:06.235 ********* 2026-04-07 00:59:59.506580 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6f748df8fc8b2cb4f1071e18fe6b0c9c07a17f8c'}}, {'key': 'public_network', 
'value': '192.168.16.0/20'}]) 2026-04-07 00:59:59.506586 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6f748df8fc8b2cb4f1071e18fe6b0c9c07a17f8c'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-07 00:59:59.506590 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6f748df8fc8b2cb4f1071e18fe6b0c9c07a17f8c'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-07 00:59:59.506595 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6f748df8fc8b2cb4f1071e18fe6b0c9c07a17f8c'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-07 00:59:59.506599 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6f748df8fc8b2cb4f1071e18fe6b0c9c07a17f8c'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-07 00:59:59.506603 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__6f748df8fc8b2cb4f1071e18fe6b0c9c07a17f8c'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__6f748df8fc8b2cb4f1071e18fe6b0c9c07a17f8c'}])  2026-04-07 00:59:59.506608 | orchestrator | 2026-04-07 00:59:59.506612 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-07 00:59:59.506615 | orchestrator | Tuesday 07 April 2026 00:54:47 +0000 (0:00:10.118) 0:05:16.353 ********* 2026-04-07 00:59:59.506619 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506623 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506627 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506630 | orchestrator | 2026-04-07 00:59:59.506634 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-07 00:59:59.506638 | orchestrator | Tuesday 07 April 2026 00:54:47 +0000 (0:00:00.339) 0:05:16.693 ********* 2026-04-07 00:59:59.506642 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-04-07 00:59:59.506646 | orchestrator | 2026-04-07 00:59:59.506652 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-07 00:59:59.506656 | orchestrator | Tuesday 07 April 2026 00:54:48 +0000 (0:00:00.751) 0:05:17.444 ********* 2026-04-07 00:59:59.506662 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506666 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.506670 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.506674 | orchestrator | 2026-04-07 00:59:59.506677 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-07 00:59:59.506681 | orchestrator | Tuesday 07 April 2026 00:54:48 +0000 (0:00:00.319) 0:05:17.764 ********* 2026-04-07 00:59:59.506685 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506689 | 
orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506692 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506696 | orchestrator | 2026-04-07 00:59:59.506700 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-07 00:59:59.506704 | orchestrator | Tuesday 07 April 2026 00:54:49 +0000 (0:00:00.376) 0:05:18.141 ********* 2026-04-07 00:59:59.506708 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-07 00:59:59.506711 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 00:59:59.506715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 00:59:59.506719 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506723 | orchestrator | 2026-04-07 00:59:59.506726 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-07 00:59:59.506730 | orchestrator | Tuesday 07 April 2026 00:54:50 +0000 (0:00:00.832) 0:05:18.973 ********* 2026-04-07 00:59:59.506734 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506738 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.506741 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.506745 | orchestrator | 2026-04-07 00:59:59.506760 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-07 00:59:59.506764 | orchestrator | 2026-04-07 00:59:59.506768 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-07 00:59:59.506772 | orchestrator | Tuesday 07 April 2026 00:54:50 +0000 (0:00:00.814) 0:05:19.787 ********* 2026-04-07 00:59:59.506776 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.506780 | orchestrator | 2026-04-07 00:59:59.506784 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-04-07 00:59:59.506788 | orchestrator | Tuesday 07 April 2026 00:54:51 +0000 (0:00:00.504) 0:05:20.292 ********* 2026-04-07 00:59:59.506792 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.506796 | orchestrator | 2026-04-07 00:59:59.506799 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-07 00:59:59.506803 | orchestrator | Tuesday 07 April 2026 00:54:52 +0000 (0:00:00.726) 0:05:21.019 ********* 2026-04-07 00:59:59.506807 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506811 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.506815 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.506818 | orchestrator | 2026-04-07 00:59:59.506822 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-07 00:59:59.506826 | orchestrator | Tuesday 07 April 2026 00:54:52 +0000 (0:00:00.795) 0:05:21.815 ********* 2026-04-07 00:59:59.506830 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506833 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506837 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506841 | orchestrator | 2026-04-07 00:59:59.506845 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-07 00:59:59.506849 | orchestrator | Tuesday 07 April 2026 00:54:53 +0000 (0:00:00.312) 0:05:22.127 ********* 2026-04-07 00:59:59.506853 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506856 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506860 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506864 | orchestrator | 2026-04-07 00:59:59.506870 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-07 
00:59:59.506874 | orchestrator | Tuesday 07 April 2026 00:54:53 +0000 (0:00:00.321) 0:05:22.449 ********* 2026-04-07 00:59:59.506878 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506882 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506885 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506889 | orchestrator | 2026-04-07 00:59:59.506893 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-07 00:59:59.506897 | orchestrator | Tuesday 07 April 2026 00:54:53 +0000 (0:00:00.300) 0:05:22.750 ********* 2026-04-07 00:59:59.506901 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506904 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.506908 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.506912 | orchestrator | 2026-04-07 00:59:59.506916 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-07 00:59:59.506920 | orchestrator | Tuesday 07 April 2026 00:54:55 +0000 (0:00:01.278) 0:05:24.028 ********* 2026-04-07 00:59:59.506923 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506927 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506931 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506934 | orchestrator | 2026-04-07 00:59:59.506938 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-07 00:59:59.506942 | orchestrator | Tuesday 07 April 2026 00:54:55 +0000 (0:00:00.325) 0:05:24.354 ********* 2026-04-07 00:59:59.506946 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.506950 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.506953 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.506957 | orchestrator | 2026-04-07 00:59:59.506961 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-07 00:59:59.506965 | 
orchestrator | Tuesday 07 April 2026 00:54:55 +0000 (0:00:00.311) 0:05:24.665 ********* 2026-04-07 00:59:59.506968 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506972 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.506976 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.506979 | orchestrator | 2026-04-07 00:59:59.506983 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-07 00:59:59.506989 | orchestrator | Tuesday 07 April 2026 00:54:56 +0000 (0:00:00.772) 0:05:25.438 ********* 2026-04-07 00:59:59.506993 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.506997 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.507001 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.507004 | orchestrator | 2026-04-07 00:59:59.507008 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-07 00:59:59.507012 | orchestrator | Tuesday 07 April 2026 00:54:57 +0000 (0:00:01.181) 0:05:26.620 ********* 2026-04-07 00:59:59.507016 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507019 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507023 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507027 | orchestrator | 2026-04-07 00:59:59.507031 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-07 00:59:59.507034 | orchestrator | Tuesday 07 April 2026 00:54:58 +0000 (0:00:00.451) 0:05:27.072 ********* 2026-04-07 00:59:59.507038 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.507042 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.507046 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.507050 | orchestrator | 2026-04-07 00:59:59.507053 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-07 00:59:59.507057 | orchestrator | Tuesday 07 April 2026 00:54:58 +0000 
(0:00:00.361) 0:05:27.434 ********* 2026-04-07 00:59:59.507061 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507065 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507068 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507072 | orchestrator | 2026-04-07 00:59:59.507076 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-07 00:59:59.507082 | orchestrator | Tuesday 07 April 2026 00:54:58 +0000 (0:00:00.409) 0:05:27.844 ********* 2026-04-07 00:59:59.507086 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507090 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507104 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507109 | orchestrator | 2026-04-07 00:59:59.507113 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-07 00:59:59.507117 | orchestrator | Tuesday 07 April 2026 00:54:59 +0000 (0:00:00.641) 0:05:28.486 ********* 2026-04-07 00:59:59.507120 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507124 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507128 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507132 | orchestrator | 2026-04-07 00:59:59.507136 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-07 00:59:59.507139 | orchestrator | Tuesday 07 April 2026 00:54:59 +0000 (0:00:00.297) 0:05:28.783 ********* 2026-04-07 00:59:59.507143 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507147 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507151 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507154 | orchestrator | 2026-04-07 00:59:59.507158 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-07 00:59:59.507162 | orchestrator | Tuesday 07 April 2026 00:55:00 +0000 
(0:00:00.330) 0:05:29.114 ********* 2026-04-07 00:59:59.507166 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507169 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507173 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507177 | orchestrator | 2026-04-07 00:59:59.507181 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-07 00:59:59.507185 | orchestrator | Tuesday 07 April 2026 00:55:00 +0000 (0:00:00.313) 0:05:29.428 ********* 2026-04-07 00:59:59.507188 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.507192 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.507196 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.507200 | orchestrator | 2026-04-07 00:59:59.507204 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-07 00:59:59.507208 | orchestrator | Tuesday 07 April 2026 00:55:01 +0000 (0:00:00.575) 0:05:30.003 ********* 2026-04-07 00:59:59.507211 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.507224 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.507229 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.507233 | orchestrator | 2026-04-07 00:59:59.507236 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-07 00:59:59.507240 | orchestrator | Tuesday 07 April 2026 00:55:01 +0000 (0:00:00.372) 0:05:30.376 ********* 2026-04-07 00:59:59.507244 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.507248 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.507251 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.507255 | orchestrator | 2026-04-07 00:59:59.507259 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-07 00:59:59.507263 | orchestrator | Tuesday 07 April 2026 00:55:02 +0000 (0:00:00.544) 0:05:30.920 ********* 2026-04-07 
00:59:59.507266 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 00:59:59.507270 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 00:59:59.507274 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 00:59:59.507278 | orchestrator | 2026-04-07 00:59:59.507282 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-07 00:59:59.507285 | orchestrator | Tuesday 07 April 2026 00:55:02 +0000 (0:00:00.869) 0:05:31.790 ********* 2026-04-07 00:59:59.507289 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.507293 | orchestrator | 2026-04-07 00:59:59.507297 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-07 00:59:59.507300 | orchestrator | Tuesday 07 April 2026 00:55:03 +0000 (0:00:00.794) 0:05:32.584 ********* 2026-04-07 00:59:59.507307 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.507311 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.507315 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.507318 | orchestrator | 2026-04-07 00:59:59.507322 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-07 00:59:59.507326 | orchestrator | Tuesday 07 April 2026 00:55:04 +0000 (0:00:00.751) 0:05:33.336 ********* 2026-04-07 00:59:59.507330 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507333 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507337 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507341 | orchestrator | 2026-04-07 00:59:59.507347 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-07 00:59:59.507351 | orchestrator | Tuesday 07 April 2026 00:55:04 
+0000 (0:00:00.315) 0:05:33.651 ********* 2026-04-07 00:59:59.507355 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-07 00:59:59.507358 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-07 00:59:59.507362 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-07 00:59:59.507366 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-07 00:59:59.507370 | orchestrator | 2026-04-07 00:59:59.507373 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-07 00:59:59.507377 | orchestrator | Tuesday 07 April 2026 00:55:12 +0000 (0:00:07.828) 0:05:41.480 ********* 2026-04-07 00:59:59.507381 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.507385 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.507388 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.507392 | orchestrator | 2026-04-07 00:59:59.507396 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-07 00:59:59.507400 | orchestrator | Tuesday 07 April 2026 00:55:13 +0000 (0:00:00.640) 0:05:42.121 ********* 2026-04-07 00:59:59.507403 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-07 00:59:59.507407 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-07 00:59:59.507411 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-07 00:59:59.507415 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 00:59:59.507418 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 00:59:59.507422 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-07 00:59:59.507426 | orchestrator | 2026-04-07 00:59:59.507442 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-07 00:59:59.507446 | orchestrator | Tuesday 07 April 2026 00:55:15 +0000 (0:00:01.904) 
0:05:44.025 ********* 2026-04-07 00:59:59.507450 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-07 00:59:59.507454 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-07 00:59:59.507458 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-07 00:59:59.507461 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-07 00:59:59.507465 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-07 00:59:59.507469 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-07 00:59:59.507473 | orchestrator | 2026-04-07 00:59:59.507477 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-07 00:59:59.507480 | orchestrator | Tuesday 07 April 2026 00:55:16 +0000 (0:00:01.522) 0:05:45.547 ********* 2026-04-07 00:59:59.507484 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.507488 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.507492 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.507496 | orchestrator | 2026-04-07 00:59:59.507499 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-07 00:59:59.507503 | orchestrator | Tuesday 07 April 2026 00:55:17 +0000 (0:00:00.918) 0:05:46.466 ********* 2026-04-07 00:59:59.507507 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507511 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507518 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507522 | orchestrator | 2026-04-07 00:59:59.507526 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-07 00:59:59.507529 | orchestrator | Tuesday 07 April 2026 00:55:18 +0000 (0:00:00.553) 0:05:47.019 ********* 2026-04-07 00:59:59.507533 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507537 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507541 | orchestrator | skipping: 
[testbed-node-2] 2026-04-07 00:59:59.507545 | orchestrator | 2026-04-07 00:59:59.507548 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-07 00:59:59.507552 | orchestrator | Tuesday 07 April 2026 00:55:18 +0000 (0:00:00.335) 0:05:47.355 ********* 2026-04-07 00:59:59.507556 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.507560 | orchestrator | 2026-04-07 00:59:59.507564 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-07 00:59:59.507568 | orchestrator | Tuesday 07 April 2026 00:55:18 +0000 (0:00:00.482) 0:05:47.837 ********* 2026-04-07 00:59:59.507571 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507575 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507579 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507583 | orchestrator | 2026-04-07 00:59:59.507587 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-07 00:59:59.507590 | orchestrator | Tuesday 07 April 2026 00:55:19 +0000 (0:00:00.307) 0:05:48.145 ********* 2026-04-07 00:59:59.507594 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.507598 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.507602 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.507605 | orchestrator | 2026-04-07 00:59:59.507609 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-07 00:59:59.507613 | orchestrator | Tuesday 07 April 2026 00:55:19 +0000 (0:00:00.545) 0:05:48.691 ********* 2026-04-07 00:59:59.507617 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 00:59:59.507621 | orchestrator | 2026-04-07 00:59:59.507625 | orchestrator | TASK [ceph-mgr : Generate 
systemd unit file] *********************************** 2026-04-07 00:59:59.507628 | orchestrator | Tuesday 07 April 2026 00:55:20 +0000 (0:00:00.519) 0:05:49.211 ********* 2026-04-07 00:59:59.507632 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.507636 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.507640 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.507643 | orchestrator | 2026-04-07 00:59:59.507647 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-07 00:59:59.507651 | orchestrator | Tuesday 07 April 2026 00:55:21 +0000 (0:00:01.165) 0:05:50.377 ********* 2026-04-07 00:59:59.507655 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.507659 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.507665 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.507669 | orchestrator | 2026-04-07 00:59:59.507673 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-07 00:59:59.507677 | orchestrator | Tuesday 07 April 2026 00:55:22 +0000 (0:00:01.394) 0:05:51.771 ********* 2026-04-07 00:59:59.507680 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.507684 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.507688 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.507692 | orchestrator | 2026-04-07 00:59:59.507696 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-07 00:59:59.507699 | orchestrator | Tuesday 07 April 2026 00:55:24 +0000 (0:00:01.941) 0:05:53.713 ********* 2026-04-07 00:59:59.507703 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.507707 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.507711 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.507719 | orchestrator | 2026-04-07 00:59:59.507724 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
**************************************
2026-04-07 00:59:59.507730 | orchestrator | Tuesday 07 April 2026 00:55:26 +0000 (0:00:02.089) 0:05:55.802 *********
2026-04-07 00:59:59.507733 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.507737 | orchestrator | skipping: [testbed-node-1]
2026-04-07 00:59:59.507741 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-07 00:59:59.507745 | orchestrator |
2026-04-07 00:59:59.507748 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-07 00:59:59.507752 | orchestrator | Tuesday 07 April 2026 00:55:27 +0000 (0:00:00.442) 0:05:56.245 *********
2026-04-07 00:59:59.507756 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-07 00:59:59.507771 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-07 00:59:59.507775 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-07 00:59:59.507779 | orchestrator |
2026-04-07 00:59:59.507783 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-07 00:59:59.507787 | orchestrator | Tuesday 07 April 2026 00:55:40 +0000 (0:00:13.352) 0:06:09.597 *********
2026-04-07 00:59:59.507791 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-07 00:59:59.507794 | orchestrator |
2026-04-07 00:59:59.507798 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-07 00:59:59.507802 | orchestrator | Tuesday 07 April 2026 00:55:42 +0000 (0:00:01.274) 0:06:10.872 *********
2026-04-07 00:59:59.507806 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.507810 | orchestrator |
2026-04-07 00:59:59.507813 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-07 00:59:59.507817 | orchestrator | Tuesday 07 April 2026 00:55:42 +0000 (0:00:00.310) 0:06:11.183 *********
2026-04-07 00:59:59.507821 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.507825 | orchestrator |
2026-04-07 00:59:59.507828 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-07 00:59:59.507832 | orchestrator | Tuesday 07 April 2026 00:55:42 +0000 (0:00:00.166) 0:06:11.349 *********
2026-04-07 00:59:59.507836 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-07 00:59:59.507840 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-07 00:59:59.507844 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-07 00:59:59.507847 | orchestrator |
2026-04-07 00:59:59.507851 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-07 00:59:59.507855 | orchestrator | Tuesday 07 April 2026 00:55:48 +0000 (0:00:05.948) 0:06:17.297 *********
2026-04-07 00:59:59.507859 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-07 00:59:59.507862 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-07 00:59:59.507866 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-07 00:59:59.507870 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-07 00:59:59.507874 | orchestrator |
2026-04-07 00:59:59.507877 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-07 00:59:59.507881 | orchestrator | Tuesday 07 April 2026 00:55:52 +0000 (0:00:04.408) 0:06:21.705 *********
2026-04-07 00:59:59.507885 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:59:59.507889 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:59:59.507892 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:59:59.507896 | orchestrator |
2026-04-07 00:59:59.507900 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-07 00:59:59.507904 | orchestrator | Tuesday 07 April 2026 00:55:53 +0000 (0:00:01.056) 0:06:22.762 *********
2026-04-07 00:59:59.507908 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 00:59:59.507915 | orchestrator |
2026-04-07 00:59:59.507919 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-07 00:59:59.507928 | orchestrator | Tuesday 07 April 2026 00:55:54 +0000 (0:00:00.516) 0:06:23.279 *********
2026-04-07 00:59:59.507932 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.507936 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.507939 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.507943 | orchestrator |
2026-04-07 00:59:59.507947 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-07 00:59:59.507951 | orchestrator | Tuesday 07 April 2026 00:55:54 +0000 (0:00:00.330) 0:06:23.609 *********
2026-04-07 00:59:59.507954 | orchestrator | changed: [testbed-node-0]
2026-04-07 00:59:59.507958 | orchestrator | changed: [testbed-node-1]
2026-04-07 00:59:59.507962 | orchestrator | changed: [testbed-node-2]
2026-04-07 00:59:59.507966 | orchestrator |
2026-04-07 00:59:59.507969 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-07 00:59:59.507975 | orchestrator | Tuesday 07 April 2026 00:55:56 +0000 (0:00:01.601) 0:06:25.211 *********
2026-04-07 00:59:59.507979 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 00:59:59.507983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 00:59:59.507986 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 00:59:59.507990 | orchestrator | skipping: [testbed-node-0]
2026-04-07 00:59:59.507994 | orchestrator |
2026-04-07 00:59:59.507998 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-07 00:59:59.508001 | orchestrator | Tuesday 07 April 2026 00:55:57 +0000 (0:00:00.656) 0:06:25.868 *********
2026-04-07 00:59:59.508005 | orchestrator | ok: [testbed-node-0]
2026-04-07 00:59:59.508009 | orchestrator | ok: [testbed-node-1]
2026-04-07 00:59:59.508013 | orchestrator | ok: [testbed-node-2]
2026-04-07 00:59:59.508016 | orchestrator |
2026-04-07 00:59:59.508020 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-07 00:59:59.508024 | orchestrator |
2026-04-07 00:59:59.508028 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-07 00:59:59.508032 | orchestrator | Tuesday 07 April 2026 00:55:57 +0000 (0:00:00.594) 0:06:26.462 *********
2026-04-07 00:59:59.508036 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-04-07 00:59:59.508039 | orchestrator |
2026-04-07 00:59:59.508043 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-07 00:59:59.508047 | orchestrator | Tuesday 07 April 2026 00:55:58 +0000 (0:00:00.721) 0:06:27.184 *********
2026-04-07 00:59:59.508051 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.508055 | orchestrator |
2026-04-07 00:59:59.508070 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-07 00:59:59.508075 | orchestrator | Tuesday 07 April 2026 00:55:58 +0000 (0:00:00.450) 0:06:27.634 *********
2026-04-07 00:59:59.508079 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508083 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508086 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508090 | orchestrator |
2026-04-07 00:59:59.508094 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-07 00:59:59.508098 | orchestrator | Tuesday 07 April 2026 00:55:59 +0000 (0:00:00.260) 0:06:27.894 *********
2026-04-07 00:59:59.508102 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508105 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508109 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508113 | orchestrator |
2026-04-07 00:59:59.508117 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-07 00:59:59.508120 | orchestrator | Tuesday 07 April 2026 00:55:59 +0000 (0:00:00.897) 0:06:28.792 *********
2026-04-07 00:59:59.508124 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508131 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508135 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508138 | orchestrator |
2026-04-07 00:59:59.508142 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-07 00:59:59.508146 | orchestrator | Tuesday 07 April 2026 00:56:00 +0000 (0:00:00.666) 0:06:29.459 *********
2026-04-07 00:59:59.508150 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508153 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508157 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508161 | orchestrator |
2026-04-07 00:59:59.508165 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-07 00:59:59.508168 | orchestrator | Tuesday 07 April 2026 00:56:01 +0000 (0:00:00.783) 0:06:30.243 *********
2026-04-07 00:59:59.508172 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508176 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508180 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508183 | orchestrator |
2026-04-07 00:59:59.508187 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-07 00:59:59.508192 | orchestrator | Tuesday 07 April 2026 00:56:01 +0000 (0:00:00.297) 0:06:30.541 *********
2026-04-07 00:59:59.508195 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508199 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508203 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508207 | orchestrator |
2026-04-07 00:59:59.508210 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-07 00:59:59.508214 | orchestrator | Tuesday 07 April 2026 00:56:02 +0000 (0:00:00.431) 0:06:30.973 *********
2026-04-07 00:59:59.508231 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508235 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508239 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508243 | orchestrator |
2026-04-07 00:59:59.508247 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-07 00:59:59.508251 | orchestrator | Tuesday 07 April 2026 00:56:02 +0000 (0:00:00.262) 0:06:31.235 *********
2026-04-07 00:59:59.508255 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508258 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508262 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508266 | orchestrator |
2026-04-07 00:59:59.508270 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-07 00:59:59.508273 | orchestrator | Tuesday 07 April 2026 00:56:03 +0000 (0:00:00.659) 0:06:31.895 *********
2026-04-07 00:59:59.508277 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508281 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508285 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508288 | orchestrator |
2026-04-07 00:59:59.508292 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-07 00:59:59.508296 | orchestrator | Tuesday 07 April 2026 00:56:03 +0000 (0:00:00.648) 0:06:32.543 *********
2026-04-07 00:59:59.508300 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508304 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508307 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508311 | orchestrator |
2026-04-07 00:59:59.508315 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-07 00:59:59.508319 | orchestrator | Tuesday 07 April 2026 00:56:04 +0000 (0:00:00.542) 0:06:33.086 *********
2026-04-07 00:59:59.508322 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508326 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508332 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508336 | orchestrator |
2026-04-07 00:59:59.508340 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-07 00:59:59.508344 | orchestrator | Tuesday 07 April 2026 00:56:04 +0000 (0:00:00.291) 0:06:33.377 *********
2026-04-07 00:59:59.508348 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508351 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508355 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508361 | orchestrator |
2026-04-07 00:59:59.508370 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-07 00:59:59.508374 | orchestrator | Tuesday 07 April 2026 00:56:04 +0000 (0:00:00.322) 0:06:33.699 *********
2026-04-07 00:59:59.508378 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508382 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508386 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508389 | orchestrator |
2026-04-07 00:59:59.508393 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-07 00:59:59.508397 | orchestrator | Tuesday 07 April 2026 00:56:05 +0000 (0:00:00.318) 0:06:34.018 *********
2026-04-07 00:59:59.508401 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508405 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508408 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508412 | orchestrator |
2026-04-07 00:59:59.508416 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-07 00:59:59.508419 | orchestrator | Tuesday 07 April 2026 00:56:05 +0000 (0:00:00.604) 0:06:34.623 *********
2026-04-07 00:59:59.508423 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508427 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508431 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508434 | orchestrator |
2026-04-07 00:59:59.508438 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-07 00:59:59.508444 | orchestrator | Tuesday 07 April 2026 00:56:06 +0000 (0:00:00.311) 0:06:34.934 *********
2026-04-07 00:59:59.508448 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508452 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508455 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508459 | orchestrator |
2026-04-07 00:59:59.508463 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-07 00:59:59.508466 | orchestrator | Tuesday 07 April 2026 00:56:06 +0000 (0:00:00.316) 0:06:35.250 *********
2026-04-07 00:59:59.508470 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508474 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508478 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508481 | orchestrator |
2026-04-07 00:59:59.508485 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-07 00:59:59.508489 | orchestrator | Tuesday 07 April 2026 00:56:06 +0000 (0:00:00.283) 0:06:35.534 *********
2026-04-07 00:59:59.508493 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508497 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508500 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508504 | orchestrator |
2026-04-07 00:59:59.508508 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-07 00:59:59.508512 | orchestrator | Tuesday 07 April 2026 00:56:07 +0000 (0:00:00.492) 0:06:36.027 *********
2026-04-07 00:59:59.508515 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508519 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508523 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508526 | orchestrator |
2026-04-07 00:59:59.508530 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-07 00:59:59.508534 | orchestrator | Tuesday 07 April 2026 00:56:07 +0000 (0:00:00.472) 0:06:36.499 *********
2026-04-07 00:59:59.508538 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508542 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508545 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508549 | orchestrator |
2026-04-07 00:59:59.508553 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-07 00:59:59.508557 | orchestrator | Tuesday 07 April 2026 00:56:07 +0000 (0:00:00.263) 0:06:36.763 *********
2026-04-07 00:59:59.508560 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-07 00:59:59.508564 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 00:59:59.508568 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 00:59:59.508574 | orchestrator |
2026-04-07 00:59:59.508578 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-07 00:59:59.508582 | orchestrator | Tuesday 07 April 2026 00:56:08 +0000 (0:00:00.715) 0:06:37.478 *********
2026-04-07 00:59:59.508585 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.508589 | orchestrator |
2026-04-07 00:59:59.508593 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-07 00:59:59.508597 | orchestrator | Tuesday 07 April 2026 00:56:09 +0000 (0:00:00.616) 0:06:38.095 *********
2026-04-07 00:59:59.508600 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508604 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508608 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508612 | orchestrator |
2026-04-07 00:59:59.508621 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-07 00:59:59.508625 | orchestrator | Tuesday 07 April 2026 00:56:09 +0000 (0:00:00.251) 0:06:38.346 *********
2026-04-07 00:59:59.508629 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508632 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508636 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508640 | orchestrator |
2026-04-07 00:59:59.508644 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-07 00:59:59.508647 | orchestrator | Tuesday 07 April 2026 00:56:09 +0000 (0:00:00.257) 0:06:38.604 *********
2026-04-07 00:59:59.508651 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508655 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508659 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508662 | orchestrator |
2026-04-07 00:59:59.508666 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-07 00:59:59.508672 | orchestrator | Tuesday 07 April 2026 00:56:10 +0000 (0:00:00.806) 0:06:39.410 *********
2026-04-07 00:59:59.508676 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.508679 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.508683 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.508687 | orchestrator |
2026-04-07 00:59:59.508691 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-07 00:59:59.508694 | orchestrator | Tuesday 07 April 2026 00:56:10 +0000 (0:00:00.286) 0:06:39.696 *********
2026-04-07 00:59:59.508698 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-07 00:59:59.508702 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-07 00:59:59.508706 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-07 00:59:59.508710 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-07 00:59:59.508713 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-07 00:59:59.508717 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-07 00:59:59.508721 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-07 00:59:59.508725 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-07 00:59:59.508728 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-07 00:59:59.508735 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-07 00:59:59.508739 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-07 00:59:59.508742 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-07 00:59:59.508746 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-07 00:59:59.508750 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-07 00:59:59.508756 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-07 00:59:59.508760 | orchestrator |
2026-04-07 00:59:59.508764 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-07 00:59:59.508772 | orchestrator | Tuesday 07 April 2026 00:56:14 +0000 (0:00:03.243) 0:06:42.940 *********
2026-04-07 00:59:59.508776 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.508780 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.508784 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.508787 | orchestrator |
2026-04-07 00:59:59.508791 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-07 00:59:59.508795 | orchestrator | Tuesday 07 April 2026 00:56:14 +0000 (0:00:00.313) 0:06:43.254 *********
2026-04-07 00:59:59.508799 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.508802 | orchestrator |
2026-04-07 00:59:59.508806 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-07 00:59:59.508810 | orchestrator | Tuesday 07 April 2026 00:56:15 +0000 (0:00:00.820) 0:06:44.074 *********
2026-04-07 00:59:59.508814 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-07 00:59:59.508822 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-07 00:59:59.508826 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-07 00:59:59.508830 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-07 00:59:59.508834 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-04-07 00:59:59.508837 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-07 00:59:59.508841 | orchestrator |
2026-04-07 00:59:59.508845 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-07 00:59:59.508849 | orchestrator | Tuesday 07 April 2026 00:56:16 +0000 (0:00:00.910) 0:06:44.984 *********
2026-04-07 00:59:59.508853 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-07 00:59:59.508856 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-07 00:59:59.508860 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-07 00:59:59.508864 | orchestrator |
2026-04-07 00:59:59.508868 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-07 00:59:59.508872 | orchestrator | Tuesday 07 April 2026 00:56:17 +0000 (0:00:01.496) 0:06:46.481 *********
2026-04-07 00:59:59.508875 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-07 00:59:59.508879 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-07 00:59:59.508883 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:59:59.508887 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-07 00:59:59.508890 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-07 00:59:59.508894 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:59:59.508898 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-07 00:59:59.508902 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-07 00:59:59.508906 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:59:59.508909 | orchestrator |
2026-04-07 00:59:59.508913 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-07 00:59:59.508917 | orchestrator | Tuesday 07 April 2026 00:56:18 +0000 (0:00:01.183) 0:06:47.665 *********
2026-04-07 00:59:59.508921 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-07 00:59:59.508924 | orchestrator |
2026-04-07 00:59:59.508928 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-07 00:59:59.508934 | orchestrator | Tuesday 07 April 2026 00:56:20 +0000 (0:00:01.821) 0:06:49.487 *********
2026-04-07 00:59:59.508938 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.508944 | orchestrator |
2026-04-07 00:59:59.508948 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-04-07 00:59:59.508956 | orchestrator | Tuesday 07 April 2026 00:56:21 +0000 (0:00:00.580) 0:06:50.068 *********
2026-04-07 00:59:59.508960 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-43d30fb7-a654-5dbf-ba50-28c21932998c', 'data_vg': 'ceph-43d30fb7-a654-5dbf-ba50-28c21932998c'})
2026-04-07 00:59:59.508965 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0', 'data_vg': 'ceph-959bec69-a72e-5ac6-9cdc-b8ec54ca62e0'})
2026-04-07 00:59:59.508969 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-68f67d56-373d-5470-8a0c-a7bd578cf9eb', 'data_vg': 'ceph-68f67d56-373d-5470-8a0c-a7bd578cf9eb'})
2026-04-07 00:59:59.508972 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-db8a0de8-f58a-5642-89e2-a8dce5d117db', 'data_vg': 'ceph-db8a0de8-f58a-5642-89e2-a8dce5d117db'})
2026-04-07 00:59:59.508976 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-27d9f8cd-a6eb-5015-929a-744349431582', 'data_vg': 'ceph-27d9f8cd-a6eb-5015-929a-744349431582'})
2026-04-07 00:59:59.508982 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d', 'data_vg': 'ceph-eae9bbfc-ddf3-58b9-bffe-50f4fd603d5d'})
2026-04-07 00:59:59.508986 | orchestrator |
2026-04-07 00:59:59.508989 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-07 00:59:59.508993 | orchestrator | Tuesday 07 April 2026 00:57:00 +0000 (0:00:39.316) 0:07:29.384 *********
2026-04-07 00:59:59.508997 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509001 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.509005 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.509008 | orchestrator |
2026-04-07 00:59:59.509012 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-07 00:59:59.509016 | orchestrator | Tuesday 07 April 2026 00:57:01 +0000 (0:00:00.538) 0:07:29.923 *********
2026-04-07 00:59:59.509020 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.509024 | orchestrator |
2026-04-07 00:59:59.509028 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-07 00:59:59.509031 | orchestrator | Tuesday 07 April 2026 00:57:01 +0000 (0:00:00.513) 0:07:30.436 *********
2026-04-07 00:59:59.509035 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.509039 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.509043 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.509046 | orchestrator |
2026-04-07 00:59:59.509050 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-07 00:59:59.509054 | orchestrator | Tuesday 07 April 2026 00:57:02 +0000 (0:00:00.611) 0:07:31.047 *********
2026-04-07 00:59:59.509058 | orchestrator | ok: [testbed-node-3]
2026-04-07 00:59:59.509062 | orchestrator | ok: [testbed-node-4]
2026-04-07 00:59:59.509065 | orchestrator | ok: [testbed-node-5]
2026-04-07 00:59:59.509069 | orchestrator |
2026-04-07 00:59:59.509073 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-07 00:59:59.509077 | orchestrator | Tuesday 07 April 2026 00:57:03 +0000 (0:00:01.589) 0:07:32.637 *********
2026-04-07 00:59:59.509081 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.509084 | orchestrator |
2026-04-07 00:59:59.509088 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-07 00:59:59.509092 | orchestrator | Tuesday 07 April 2026 00:57:04 +0000 (0:00:00.498) 0:07:33.135 *********
2026-04-07 00:59:59.509096 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:59:59.509100 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:59:59.509103 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:59:59.509107 | orchestrator |
2026-04-07 00:59:59.509111 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-07 00:59:59.509117 | orchestrator | Tuesday 07 April 2026 00:57:05 +0000 (0:00:01.243) 0:07:34.378 *********
2026-04-07 00:59:59.509121 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:59:59.509125 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:59:59.509129 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:59:59.509132 | orchestrator |
2026-04-07 00:59:59.509136 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-07 00:59:59.509140 | orchestrator | Tuesday 07 April 2026 00:57:07 +0000 (0:00:01.547) 0:07:35.926 *********
2026-04-07 00:59:59.509144 | orchestrator | changed: [testbed-node-4]
2026-04-07 00:59:59.509148 | orchestrator | changed: [testbed-node-5]
2026-04-07 00:59:59.509151 | orchestrator | changed: [testbed-node-3]
2026-04-07 00:59:59.509155 | orchestrator |
2026-04-07 00:59:59.509159 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-07 00:59:59.509163 | orchestrator | Tuesday 07 April 2026 00:57:09 +0000 (0:00:01.960) 0:07:37.886 *********
2026-04-07 00:59:59.509167 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509170 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.509174 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.509178 | orchestrator |
2026-04-07 00:59:59.509182 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-07 00:59:59.509185 | orchestrator | Tuesday 07 April 2026 00:57:09 +0000 (0:00:00.308) 0:07:38.194 *********
2026-04-07 00:59:59.509189 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509193 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.509197 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.509200 | orchestrator |
2026-04-07 00:59:59.509206 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-07 00:59:59.509210 | orchestrator | Tuesday 07 April 2026 00:57:09 +0000 (0:00:00.304) 0:07:38.498 *********
2026-04-07 00:59:59.509214 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-07 00:59:59.509247 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-04-07 00:59:59.509251 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-04-07 00:59:59.509255 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-04-07 00:59:59.509258 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-07 00:59:59.509262 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-07 00:59:59.509266 | orchestrator |
2026-04-07 00:59:59.509270 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-07 00:59:59.509273 | orchestrator | Tuesday 07 April 2026 00:57:11 +0000 (0:00:01.414) 0:07:39.913 *********
2026-04-07 00:59:59.509277 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-07 00:59:59.509281 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-07 00:59:59.509285 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-07 00:59:59.509289 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-04-07 00:59:59.509293 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-04-07 00:59:59.509296 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-07 00:59:59.509300 | orchestrator |
2026-04-07 00:59:59.509304 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-07 00:59:59.509308 | orchestrator | Tuesday 07 April 2026 00:57:13 +0000 (0:00:02.267) 0:07:42.181 *********
2026-04-07 00:59:59.509311 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-07 00:59:59.509315 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-07 00:59:59.509319 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-07 00:59:59.509323 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-04-07 00:59:59.509329 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-07 00:59:59.509333 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-04-07 00:59:59.509336 | orchestrator |
2026-04-07 00:59:59.509340 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-07 00:59:59.509344 | orchestrator | Tuesday 07 April 2026 00:57:16 +0000 (0:00:03.677) 0:07:45.858 *********
2026-04-07 00:59:59.509348 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509354 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.509358 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-07 00:59:59.509362 | orchestrator |
2026-04-07 00:59:59.509366 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-07 00:59:59.509369 | orchestrator | Tuesday 07 April 2026 00:57:19 +0000 (0:00:02.142) 0:07:48.000 *********
2026-04-07 00:59:59.509373 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509377 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.509381 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-07 00:59:59.509385 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-07 00:59:59.509388 | orchestrator |
2026-04-07 00:59:59.509392 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-07 00:59:59.509396 | orchestrator | Tuesday 07 April 2026 00:57:31 +0000 (0:00:12.432) 0:08:00.433 *********
2026-04-07 00:59:59.509400 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509403 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.509407 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.509411 | orchestrator |
2026-04-07 00:59:59.509415 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-07 00:59:59.509419 | orchestrator | Tuesday 07 April 2026 00:57:32 +0000 (0:00:00.808) 0:08:01.242 *********
2026-04-07 00:59:59.509422 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509426 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.509430 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.509434 | orchestrator |
2026-04-07 00:59:59.509437 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-07 00:59:59.509441 | orchestrator | Tuesday 07 April 2026 00:57:33 +0000 (0:00:00.693) 0:08:01.935 *********
2026-04-07 00:59:59.509445 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 00:59:59.509449 | orchestrator |
2026-04-07 00:59:59.509452 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-07 00:59:59.509456 | orchestrator | Tuesday 07 April 2026 00:57:33 +0000 (0:00:00.556) 0:08:02.491 *********
2026-04-07 00:59:59.509460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 00:59:59.509464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 00:59:59.509467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 00:59:59.509471 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509475 | orchestrator |
2026-04-07 00:59:59.509479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-07 00:59:59.509483 | orchestrator | Tuesday 07 April 2026 00:57:33 +0000 (0:00:00.373) 0:08:02.864 *********
2026-04-07 00:59:59.509486 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509490 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.509494 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.509498 | orchestrator |
2026-04-07 00:59:59.509501 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-07 00:59:59.509505 | orchestrator | Tuesday 07 April 2026 00:57:34 +0000 (0:00:00.332) 0:08:03.197 *********
2026-04-07 00:59:59.509509 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509513 | orchestrator |
2026-04-07 00:59:59.509516 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-07 00:59:59.509520 | orchestrator | Tuesday 07 April 2026 00:57:35 +0000 (0:00:00.813) 0:08:04.010 *********
2026-04-07 00:59:59.509524 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509528 | orchestrator | skipping: [testbed-node-4]
2026-04-07 00:59:59.509531 | orchestrator | skipping: [testbed-node-5]
2026-04-07 00:59:59.509535 | orchestrator |
2026-04-07 00:59:59.509539 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-07 00:59:59.509545 | orchestrator | Tuesday 07 April 2026 00:57:35 +0000 (0:00:00.347) 0:08:04.358 *********
2026-04-07 00:59:59.509551 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509555 | orchestrator |
2026-04-07 00:59:59.509558 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-07 00:59:59.509562 | orchestrator | Tuesday 07 April 2026 00:57:35 +0000 (0:00:00.242) 0:08:04.600 *********
2026-04-07 00:59:59.509566 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509570 | orchestrator |
2026-04-07 00:59:59.509573 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-07 00:59:59.509577 | orchestrator | Tuesday 07 April 2026 00:57:35 +0000 (0:00:00.252) 0:08:04.852 *********
2026-04-07 00:59:59.509581 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509585 | orchestrator |
2026-04-07 00:59:59.509589 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-07 00:59:59.509592 | orchestrator | Tuesday 07 April 2026 00:57:36 +0000 (0:00:00.126) 0:08:04.979 *********
2026-04-07 00:59:59.509596 | orchestrator | skipping: [testbed-node-3]
2026-04-07 00:59:59.509600 | orchestrator |
2026-04-07 00:59:59.509604 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-07 00:59:59.509607 | orchestrator | Tuesday 07 April 2026
00:57:36 +0000 (0:00:00.273) 0:08:05.253 ********* 2026-04-07 00:59:59.509611 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.509615 | orchestrator | 2026-04-07 00:59:59.509619 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-07 00:59:59.509622 | orchestrator | Tuesday 07 April 2026 00:57:36 +0000 (0:00:00.225) 0:08:05.478 ********* 2026-04-07 00:59:59.509626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 00:59:59.509630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 00:59:59.509635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 00:59:59.509639 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.509643 | orchestrator | 2026-04-07 00:59:59.509647 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-07 00:59:59.509651 | orchestrator | Tuesday 07 April 2026 00:57:37 +0000 (0:00:00.389) 0:08:05.868 ********* 2026-04-07 00:59:59.509654 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.509658 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.509667 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.509671 | orchestrator | 2026-04-07 00:59:59.509674 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-07 00:59:59.509678 | orchestrator | Tuesday 07 April 2026 00:57:37 +0000 (0:00:00.657) 0:08:06.526 ********* 2026-04-07 00:59:59.509682 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.509686 | orchestrator | 2026-04-07 00:59:59.509690 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-07 00:59:59.509693 | orchestrator | Tuesday 07 April 2026 00:57:37 +0000 (0:00:00.253) 0:08:06.779 ********* 2026-04-07 00:59:59.509697 | orchestrator | skipping: [testbed-node-3] 2026-04-07 
00:59:59.509701 | orchestrator | 2026-04-07 00:59:59.509705 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-07 00:59:59.509708 | orchestrator | 2026-04-07 00:59:59.509716 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-07 00:59:59.509720 | orchestrator | Tuesday 07 April 2026 00:57:38 +0000 (0:00:00.675) 0:08:07.455 ********* 2026-04-07 00:59:59.509724 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.509729 | orchestrator | 2026-04-07 00:59:59.509732 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-07 00:59:59.509736 | orchestrator | Tuesday 07 April 2026 00:57:39 +0000 (0:00:01.235) 0:08:08.690 ********* 2026-04-07 00:59:59.509740 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.509748 | orchestrator | 2026-04-07 00:59:59.509752 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-07 00:59:59.509755 | orchestrator | Tuesday 07 April 2026 00:57:41 +0000 (0:00:01.317) 0:08:10.008 ********* 2026-04-07 00:59:59.509759 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.509763 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.509767 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.509771 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.509774 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.509778 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.509782 | orchestrator | 2026-04-07 00:59:59.509786 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-04-07 00:59:59.509789 | orchestrator | Tuesday 07 April 2026 00:57:42 +0000 (0:00:01.081) 0:08:11.090 ********* 2026-04-07 00:59:59.509793 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.509797 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.509801 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.509805 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.509808 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.509812 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.509816 | orchestrator | 2026-04-07 00:59:59.509820 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-07 00:59:59.509823 | orchestrator | Tuesday 07 April 2026 00:57:43 +0000 (0:00:01.062) 0:08:12.153 ********* 2026-04-07 00:59:59.509827 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.509831 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.509835 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.509838 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.509842 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.509846 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.509850 | orchestrator | 2026-04-07 00:59:59.509854 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-07 00:59:59.509857 | orchestrator | Tuesday 07 April 2026 00:57:44 +0000 (0:00:01.298) 0:08:13.452 ********* 2026-04-07 00:59:59.509861 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.509867 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.509871 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.509875 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.509879 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.509882 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.509886 | orchestrator | 2026-04-07 
00:59:59.509890 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-07 00:59:59.509894 | orchestrator | Tuesday 07 April 2026 00:57:45 +0000 (0:00:01.019) 0:08:14.471 ********* 2026-04-07 00:59:59.509897 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.509901 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.509905 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.509909 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.509912 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.509916 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.509920 | orchestrator | 2026-04-07 00:59:59.509924 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-07 00:59:59.509928 | orchestrator | Tuesday 07 April 2026 00:57:46 +0000 (0:00:00.953) 0:08:15.424 ********* 2026-04-07 00:59:59.509931 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.509935 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.509939 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.509943 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.509947 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.509950 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.509954 | orchestrator | 2026-04-07 00:59:59.509958 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-07 00:59:59.509962 | orchestrator | Tuesday 07 April 2026 00:57:47 +0000 (0:00:00.578) 0:08:16.003 ********* 2026-04-07 00:59:59.509968 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.509972 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.509975 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.509981 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.509985 | orchestrator | skipping: [testbed-node-4] 
2026-04-07 00:59:59.509989 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.509992 | orchestrator | 2026-04-07 00:59:59.509996 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-07 00:59:59.510000 | orchestrator | Tuesday 07 April 2026 00:57:47 +0000 (0:00:00.567) 0:08:16.571 ********* 2026-04-07 00:59:59.510004 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510008 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.510034 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.510040 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510043 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510047 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510051 | orchestrator | 2026-04-07 00:59:59.510055 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-07 00:59:59.510059 | orchestrator | Tuesday 07 April 2026 00:57:49 +0000 (0:00:01.375) 0:08:17.947 ********* 2026-04-07 00:59:59.510062 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510066 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.510070 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.510074 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510077 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510081 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510085 | orchestrator | 2026-04-07 00:59:59.510089 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-07 00:59:59.510093 | orchestrator | Tuesday 07 April 2026 00:57:50 +0000 (0:00:01.108) 0:08:19.056 ********* 2026-04-07 00:59:59.510101 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.510105 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.510109 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.510112 | orchestrator | skipping: [testbed-node-3] 
2026-04-07 00:59:59.510116 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.510120 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.510124 | orchestrator | 2026-04-07 00:59:59.510128 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-07 00:59:59.510131 | orchestrator | Tuesday 07 April 2026 00:57:51 +0000 (0:00:00.945) 0:08:20.002 ********* 2026-04-07 00:59:59.510135 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510139 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.510143 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.510147 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.510150 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.510154 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.510158 | orchestrator | 2026-04-07 00:59:59.510162 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-07 00:59:59.510165 | orchestrator | Tuesday 07 April 2026 00:57:51 +0000 (0:00:00.551) 0:08:20.554 ********* 2026-04-07 00:59:59.510169 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.510173 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.510177 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.510180 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510184 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510188 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510192 | orchestrator | 2026-04-07 00:59:59.510196 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-07 00:59:59.510199 | orchestrator | Tuesday 07 April 2026 00:57:52 +0000 (0:00:00.722) 0:08:21.276 ********* 2026-04-07 00:59:59.510203 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.510207 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.510211 | orchestrator 
| skipping: [testbed-node-2] 2026-04-07 00:59:59.510221 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510232 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510238 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510244 | orchestrator | 2026-04-07 00:59:59.510256 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-07 00:59:59.510263 | orchestrator | Tuesday 07 April 2026 00:57:52 +0000 (0:00:00.560) 0:08:21.837 ********* 2026-04-07 00:59:59.510269 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.510275 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.510279 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.510283 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510287 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510291 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510294 | orchestrator | 2026-04-07 00:59:59.510298 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-07 00:59:59.510302 | orchestrator | Tuesday 07 April 2026 00:57:53 +0000 (0:00:00.689) 0:08:22.526 ********* 2026-04-07 00:59:59.510308 | orchestrator | skipping: [testbed-node-0] 2026-04-07 00:59:59.510312 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.510316 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.510319 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.510323 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.510327 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.510330 | orchestrator | 2026-04-07 00:59:59.510334 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-07 00:59:59.510338 | orchestrator | Tuesday 07 April 2026 00:57:54 +0000 (0:00:00.506) 0:08:23.033 ********* 2026-04-07 00:59:59.510342 | orchestrator | skipping: [testbed-node-0] 2026-04-07 
00:59:59.510345 | orchestrator | skipping: [testbed-node-1] 2026-04-07 00:59:59.510349 | orchestrator | skipping: [testbed-node-2] 2026-04-07 00:59:59.510353 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.510357 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.510360 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.510364 | orchestrator | 2026-04-07 00:59:59.510368 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-07 00:59:59.510371 | orchestrator | Tuesday 07 April 2026 00:57:54 +0000 (0:00:00.650) 0:08:23.683 ********* 2026-04-07 00:59:59.510375 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510379 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.510383 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.510386 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.510390 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.510394 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.510398 | orchestrator | 2026-04-07 00:59:59.510401 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-07 00:59:59.510405 | orchestrator | Tuesday 07 April 2026 00:57:55 +0000 (0:00:00.519) 0:08:24.203 ********* 2026-04-07 00:59:59.510409 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510412 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.510419 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.510423 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510426 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510430 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510434 | orchestrator | 2026-04-07 00:59:59.510438 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-07 00:59:59.510441 | orchestrator | Tuesday 07 April 2026 00:57:56 +0000 (0:00:00.694) 0:08:24.898 ********* 
2026-04-07 00:59:59.510445 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510449 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.510453 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.510456 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510460 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510464 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510467 | orchestrator | 2026-04-07 00:59:59.510471 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-07 00:59:59.510475 | orchestrator | Tuesday 07 April 2026 00:57:57 +0000 (0:00:00.983) 0:08:25.881 ********* 2026-04-07 00:59:59.510482 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.510486 | orchestrator | 2026-04-07 00:59:59.510490 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-07 00:59:59.510493 | orchestrator | Tuesday 07 April 2026 00:58:00 +0000 (0:00:03.077) 0:08:28.959 ********* 2026-04-07 00:59:59.510497 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510501 | orchestrator | 2026-04-07 00:59:59.510504 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-07 00:59:59.510508 | orchestrator | Tuesday 07 April 2026 00:58:01 +0000 (0:00:01.842) 0:08:30.802 ********* 2026-04-07 00:59:59.510512 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510516 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.510519 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.510523 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.510527 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.510531 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.510534 | orchestrator | 2026-04-07 00:59:59.510538 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-07 00:59:59.510542 | 
orchestrator | Tuesday 07 April 2026 00:58:03 +0000 (0:00:01.654) 0:08:32.456 ********* 2026-04-07 00:59:59.510546 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.510549 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.510553 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.510557 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.510561 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.510569 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.510573 | orchestrator | 2026-04-07 00:59:59.510577 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-07 00:59:59.510580 | orchestrator | Tuesday 07 April 2026 00:58:04 +0000 (0:00:01.335) 0:08:33.791 ********* 2026-04-07 00:59:59.510584 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.510589 | orchestrator | 2026-04-07 00:59:59.510592 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-07 00:59:59.510596 | orchestrator | Tuesday 07 April 2026 00:58:06 +0000 (0:00:01.235) 0:08:35.027 ********* 2026-04-07 00:59:59.510600 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.510604 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.510607 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.510611 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.510615 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.510619 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.510622 | orchestrator | 2026-04-07 00:59:59.510626 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-07 00:59:59.510630 | orchestrator | Tuesday 07 April 2026 00:58:07 +0000 (0:00:01.466) 0:08:36.494 ********* 2026-04-07 
00:59:59.510634 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.510637 | orchestrator | changed: [testbed-node-1] 2026-04-07 00:59:59.510641 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.510645 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.510648 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.510652 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.510656 | orchestrator | 2026-04-07 00:59:59.510660 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-07 00:59:59.510664 | orchestrator | Tuesday 07 April 2026 00:58:11 +0000 (0:00:03.486) 0:08:39.981 ********* 2026-04-07 00:59:59.510669 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.510677 | orchestrator | 2026-04-07 00:59:59.510681 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-07 00:59:59.510685 | orchestrator | Tuesday 07 April 2026 00:58:12 +0000 (0:00:01.270) 0:08:41.251 ********* 2026-04-07 00:59:59.510692 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510696 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.510700 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.510704 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510707 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510711 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510715 | orchestrator | 2026-04-07 00:59:59.510719 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-07 00:59:59.510722 | orchestrator | Tuesday 07 April 2026 00:58:13 +0000 (0:00:00.623) 0:08:41.874 ********* 2026-04-07 00:59:59.510726 | orchestrator | changed: [testbed-node-0] 2026-04-07 00:59:59.510730 | orchestrator | changed: 
[testbed-node-1] 2026-04-07 00:59:59.510734 | orchestrator | changed: [testbed-node-2] 2026-04-07 00:59:59.510738 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.510741 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.510745 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.510750 | orchestrator | 2026-04-07 00:59:59.510756 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-07 00:59:59.510763 | orchestrator | Tuesday 07 April 2026 00:58:15 +0000 (0:00:02.344) 0:08:44.219 ********* 2026-04-07 00:59:59.510769 | orchestrator | ok: [testbed-node-0] 2026-04-07 00:59:59.510775 | orchestrator | ok: [testbed-node-1] 2026-04-07 00:59:59.510781 | orchestrator | ok: [testbed-node-2] 2026-04-07 00:59:59.510787 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510796 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510802 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510808 | orchestrator | 2026-04-07 00:59:59.510815 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-07 00:59:59.510822 | orchestrator | 2026-04-07 00:59:59.510828 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-07 00:59:59.510835 | orchestrator | Tuesday 07 April 2026 00:58:16 +0000 (0:00:00.882) 0:08:45.101 ********* 2026-04-07 00:59:59.510840 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.510844 | orchestrator | 2026-04-07 00:59:59.510848 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-07 00:59:59.510852 | orchestrator | Tuesday 07 April 2026 00:58:16 +0000 (0:00:00.442) 0:08:45.544 ********* 2026-04-07 00:59:59.510856 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for 
testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.510860 | orchestrator | 2026-04-07 00:59:59.510863 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-07 00:59:59.510867 | orchestrator | Tuesday 07 April 2026 00:58:17 +0000 (0:00:00.600) 0:08:46.144 ********* 2026-04-07 00:59:59.510871 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.510875 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.510878 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.510893 | orchestrator | 2026-04-07 00:59:59.510899 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-07 00:59:59.510906 | orchestrator | Tuesday 07 April 2026 00:58:17 +0000 (0:00:00.267) 0:08:46.412 ********* 2026-04-07 00:59:59.510912 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510919 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510925 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510932 | orchestrator | 2026-04-07 00:59:59.510936 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-07 00:59:59.510940 | orchestrator | Tuesday 07 April 2026 00:58:18 +0000 (0:00:00.652) 0:08:47.064 ********* 2026-04-07 00:59:59.510943 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510947 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510951 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.510954 | orchestrator | 2026-04-07 00:59:59.510958 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-07 00:59:59.510966 | orchestrator | Tuesday 07 April 2026 00:58:18 +0000 (0:00:00.656) 0:08:47.720 ********* 2026-04-07 00:59:59.510970 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.510973 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.510977 | orchestrator | ok: [testbed-node-5] 2026-04-07 
TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 07 April 2026 00:58:19 +0000 (0:00:00.824) 0:08:48.545 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 07 April 2026 00:58:19 +0000 (0:00:00.306) 0:08:48.852 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 07 April 2026 00:58:20 +0000 (0:00:00.322) 0:08:49.174 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 07 April 2026 00:58:20 +0000 (0:00:00.272) 0:08:49.447 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 07 April 2026 00:58:21 +0000 (0:00:00.894) 0:08:50.341 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 07 April 2026 00:58:22 +0000 (0:00:00.668) 0:08:51.009 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 07 April 2026 00:58:22 +0000 (0:00:00.272) 0:08:51.282 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Tuesday 07 April 2026 00:58:22 +0000 (0:00:00.281) 0:08:51.563 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Tuesday 07 April 2026 00:58:23 +0000 (0:00:00.507) 0:08:52.071 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Tuesday 07 April 2026 00:58:23 +0000 (0:00:00.314) 0:08:52.386 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Tuesday 07 April 2026 00:58:23 +0000 (0:00:00.356) 0:08:52.742 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Tuesday 07 April 2026 00:58:24 +0000 (0:00:00.309) 0:08:53.052 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Tuesday 07 April 2026 00:58:24 +0000 (0:00:00.646) 0:08:53.699 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Tuesday 07 April 2026 00:58:25 +0000 (0:00:00.343) 0:08:54.043 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Tuesday 07 April 2026 00:58:25 +0000 (0:00:00.334) 0:08:54.378 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Tuesday 07 April 2026 00:58:26 +0000 (0:00:00.550) 0:08:54.929 *********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Tuesday 07 April 2026 00:58:26 +0000 (0:00:00.627) 0:08:55.557 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Tuesday 07 April 2026 00:58:28 +0000 (0:00:01.505) 0:08:57.062 *********
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Tuesday 07 April 2026 00:58:28 +0000 (0:00:00.190) 0:08:57.253 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Tuesday 07 April 2026 00:58:33 +0000 (0:00:04.818) 0:09:02.071 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Tuesday 07 April 2026 00:58:35 +0000 (0:00:02.636) 0:09:04.707 *********
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Tuesday 07 April 2026 00:58:36 +0000 (0:00:00.773) 0:09:05.481 *********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Tuesday 07 April 2026 00:58:37 +0000 (0:00:01.063) 0:09:06.545 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Tuesday 07 April 2026 00:58:39 +0000 (0:00:01.692) 0:09:08.238 *********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Tuesday 07 April 2026 00:58:40 +0000 (0:00:01.342) 0:09:09.580 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Tuesday 07 April 2026 00:58:43 +0000 (0:00:02.494) 0:09:12.074 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Tuesday 07 April 2026 00:58:43 +0000 (0:00:00.320) 0:09:12.395 *********
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-5, testbed-node-3

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Tuesday 07 April 2026 00:58:44 +0000 (0:00:00.549) 0:09:12.944 *********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Tuesday 07 April 2026 00:58:44 +0000 (0:00:00.840) 0:09:13.785 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Tuesday 07 April 2026 00:58:46 +0000 (0:00:01.604) 0:09:15.389 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Tuesday 07 April 2026 00:58:47 +0000 (0:00:01.310) 0:09:16.700 *********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-mds : Systemd start mds container] **********************************
Tuesday 07 April 2026 00:58:50 +0000 (0:00:02.193) 0:09:18.894 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Tuesday 07 April 2026 00:58:51 +0000 (0:00:01.962) 0:09:20.856 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Tuesday 07 April 2026 00:58:53 +0000 (0:00:01.662) 0:09:22.518 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Tuesday 07 April 2026 00:58:54 +0000 (0:00:00.715) 0:09:23.234 *********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Tuesday 07 April 2026 00:58:55 +0000 (0:00:00.650) 0:09:23.885 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Tuesday 07 April 2026 00:58:55 +0000 (0:00:00.644) 0:09:24.529 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Tuesday 07 April 2026 00:58:56 +0000 (0:00:01.138) 0:09:25.667 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Tuesday 07 April 2026 00:58:57 +0000 (0:00:00.641) 0:09:26.309 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 07 April 2026 00:58:57 +0000 (0:00:00.543) 0:09:26.853 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 07 April 2026 00:58:58 +0000 (0:00:00.764) 0:09:27.617 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 07 April 2026 00:58:59 +0000 (0:00:00.505) 0:09:28.123 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 07 April 2026 00:58:59 +0000 (0:00:00.581) 0:09:28.704 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 07 April 2026 00:59:01 +0000 (0:00:01.727) 0:09:30.431 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 07 April 2026 00:59:02 +0000 (0:00:00.780) 0:09:31.212 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 07 April 2026 00:59:03 +0000 (0:00:00.739) 0:09:31.952 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 07 April 2026 00:59:03 +0000 (0:00:00.595) 0:09:32.547 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 07 April 2026 00:59:03 +0000 (0:00:00.298) 0:09:32.845 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 07 April 2026 00:59:04 +0000 (0:00:00.302) 0:09:33.148 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 07 April 2026 00:59:05 +0000 (0:00:00.792) 0:09:33.940 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 07 April 2026 00:59:06 +0000 (0:00:01.055) 0:09:34.996 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 07 April 2026 00:59:06 +0000 (0:00:00.288) 0:09:35.285 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Tuesday 07 April 2026 00:59:06 +0000 (0:00:00.314) 0:09:35.599 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Tuesday 07 April 2026 00:59:07 +0000 (0:00:00.316) 0:09:35.916 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Tuesday 07 April 2026 00:59:07 +0000 (0:00:00.595) 0:09:36.512 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Tuesday 07 April 2026 00:59:07 +0000 (0:00:00.343) 0:09:36.856 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Tuesday 07 April 2026 00:59:08 +0000 (0:00:00.322) 0:09:37.179 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Tuesday 07 April 2026 00:59:08 +0000 (0:00:00.285) 0:09:37.464 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Tuesday 07 April 2026 00:59:09 +0000 (0:00:00.582) 0:09:38.047 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Tuesday 07 April 2026 00:59:09 +0000 (0:00:00.347) 0:09:38.394 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Tuesday 07 April 2026 00:59:10 +0000 (0:00:00.513) 0:09:38.908 *********
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Tuesday 07 April 2026 00:59:10 +0000 (0:00:00.650) 0:09:39.558 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Tuesday 07 April 2026 00:59:12 +0000 (0:00:01.548) 0:09:41.106 *********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
Tuesday 07 April 2026 00:59:13 +0000 (0:00:01.149) 0:09:42.256 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
Tuesday 07 April 2026 00:59:13 +0000 (0:00:00.263) 0:09:42.520 *********
included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Create rados gateway directories] *****************************
Tuesday 07 April 2026 00:59:14 +0000 (0:00:00.653) 0:09:43.173 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : Create rgw keyrings] ******************************************
Tuesday 07 April 2026 00:59:14 +0000 (0:00:00.685) 0:09:43.859 *********
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]

TASK [ceph-rgw : Get keys from monitors] ***************************************
Tuesday 07 April 2026 00:59:18 +0000 (0:00:03.253) 0:09:47.112 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Tuesday 07 April 2026 00:59:20 +0000 (0:00:02.178) 0:09:49.291 *********
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Rgw pool creation tasks] **************************************
Tuesday 07 April 2026 00:59:21 +0000 (0:00:01.301) 0:09:50.592 *********
included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3

TASK [ceph-rgw : Create ec profile] ********************************************
Tuesday 07 April 2026 00:59:21 +0000 (0:00:00.226) 0:09:50.819 *********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Set crush rule] ***********************************************
Tuesday 07 April 2026 00:59:22 +0000 (0:00:00.566) 0:09:51.386 *********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Create rgw pools] *********************************************
Tuesday 07 April 2026 00:59:23 +0000 (0:00:00.905) 0:09:52.291 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})

TASK [ceph-rgw : 
Include_tasks openstack-keystone.yml] ************************* 2026-04-07 00:59:59.512920 | orchestrator | Tuesday 07 April 2026 00:59:44 +0000 (0:00:21.289) 0:10:13.581 ********* 2026-04-07 00:59:59.512924 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.512927 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.512931 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.512935 | orchestrator | 2026-04-07 00:59:59.512938 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-07 00:59:59.512942 | orchestrator | Tuesday 07 April 2026 00:59:45 +0000 (0:00:00.548) 0:10:14.129 ********* 2026-04-07 00:59:59.512948 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.512952 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.512956 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.512959 | orchestrator | 2026-04-07 00:59:59.512963 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-07 00:59:59.512967 | orchestrator | Tuesday 07 April 2026 00:59:45 +0000 (0:00:00.314) 0:10:14.444 ********* 2026-04-07 00:59:59.512970 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.512974 | orchestrator | 2026-04-07 00:59:59.512978 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-07 00:59:59.512982 | orchestrator | Tuesday 07 April 2026 00:59:46 +0000 (0:00:00.576) 0:10:15.020 ********* 2026-04-07 00:59:59.512985 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.512989 | orchestrator | 2026-04-07 00:59:59.512993 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-07 00:59:59.512997 | orchestrator | Tuesday 07 April 
2026 00:59:46 +0000 (0:00:00.737) 0:10:15.757 ********* 2026-04-07 00:59:59.513000 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.513004 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.513008 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.513012 | orchestrator | 2026-04-07 00:59:59.513015 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-07 00:59:59.513019 | orchestrator | Tuesday 07 April 2026 00:59:48 +0000 (0:00:01.182) 0:10:16.940 ********* 2026-04-07 00:59:59.513026 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.513030 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.513034 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.513038 | orchestrator | 2026-04-07 00:59:59.513044 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-07 00:59:59.513048 | orchestrator | Tuesday 07 April 2026 00:59:49 +0000 (0:00:01.068) 0:10:18.009 ********* 2026-04-07 00:59:59.513051 | orchestrator | changed: [testbed-node-3] 2026-04-07 00:59:59.513060 | orchestrator | changed: [testbed-node-5] 2026-04-07 00:59:59.513064 | orchestrator | changed: [testbed-node-4] 2026-04-07 00:59:59.513068 | orchestrator | 2026-04-07 00:59:59.513072 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-07 00:59:59.513076 | orchestrator | Tuesday 07 April 2026 00:59:51 +0000 (0:00:02.101) 0:10:20.111 ********* 2026-04-07 00:59:59.513079 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-07 00:59:59.513083 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-07 00:59:59.513087 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-07 00:59:59.513091 | orchestrator | 2026-04-07 00:59:59.513094 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-07 00:59:59.513098 | orchestrator | Tuesday 07 April 2026 00:59:53 +0000 (0:00:02.664) 0:10:22.775 ********* 2026-04-07 00:59:59.513102 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.513106 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.513109 | orchestrator | skipping: [testbed-node-5] 2026-04-07 00:59:59.513113 | orchestrator | 2026-04-07 00:59:59.513117 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-07 00:59:59.513121 | orchestrator | Tuesday 07 April 2026 00:59:54 +0000 (0:00:00.476) 0:10:23.252 ********* 2026-04-07 00:59:59.513124 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 00:59:59.513128 | orchestrator | 2026-04-07 00:59:59.513132 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-07 00:59:59.513136 | orchestrator | Tuesday 07 April 2026 00:59:54 +0000 (0:00:00.485) 0:10:23.738 ********* 2026-04-07 00:59:59.513139 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.513143 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.513147 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.513151 | orchestrator | 2026-04-07 00:59:59.513154 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-07 00:59:59.513158 | orchestrator | Tuesday 07 April 2026 00:59:55 +0000 (0:00:00.301) 0:10:24.039 ********* 2026-04-07 00:59:59.513162 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.513166 | orchestrator | skipping: [testbed-node-4] 2026-04-07 00:59:59.513169 | orchestrator | skipping: [testbed-node-5] 2026-04-07 
00:59:59.513173 | orchestrator | 2026-04-07 00:59:59.513177 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-07 00:59:59.513181 | orchestrator | Tuesday 07 April 2026 00:59:55 +0000 (0:00:00.481) 0:10:24.521 ********* 2026-04-07 00:59:59.513184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 00:59:59.513188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 00:59:59.513192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 00:59:59.513196 | orchestrator | skipping: [testbed-node-3] 2026-04-07 00:59:59.513199 | orchestrator | 2026-04-07 00:59:59.513203 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-07 00:59:59.513207 | orchestrator | Tuesday 07 April 2026 00:59:56 +0000 (0:00:00.572) 0:10:25.094 ********* 2026-04-07 00:59:59.513211 | orchestrator | ok: [testbed-node-3] 2026-04-07 00:59:59.513214 | orchestrator | ok: [testbed-node-4] 2026-04-07 00:59:59.513233 | orchestrator | ok: [testbed-node-5] 2026-04-07 00:59:59.513237 | orchestrator | 2026-04-07 00:59:59.513240 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 00:59:59.513244 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2026-04-07 00:59:59.513251 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-07 00:59:59.513255 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-07 00:59:59.513258 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2026-04-07 00:59:59.513262 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-07 00:59:59.513266 
| orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-07 00:59:59.513270 | orchestrator | 2026-04-07 00:59:59.513273 | orchestrator | 2026-04-07 00:59:59.513277 | orchestrator | 2026-04-07 00:59:59.513281 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 00:59:59.513285 | orchestrator | Tuesday 07 April 2026 00:59:56 +0000 (0:00:00.210) 0:10:25.304 ********* 2026-04-07 00:59:59.513289 | orchestrator | =============================================================================== 2026-04-07 00:59:59.513292 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 64.20s 2026-04-07 00:59:59.513296 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.32s 2026-04-07 00:59:59.513302 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.32s 2026-04-07 00:59:59.513306 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 21.29s 2026-04-07 00:59:59.513310 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.35s 2026-04-07 00:59:59.513314 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.43s 2026-04-07 00:59:59.513317 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.12s 2026-04-07 00:59:59.513321 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 7.83s 2026-04-07 00:59:59.513325 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.23s 2026-04-07 00:59:59.513333 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.23s 2026-04-07 00:59:59.513337 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 5.95s 2026-04-07 00:59:59.513341 
| orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 4.82s 2026-04-07 00:59:59.513345 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.41s 2026-04-07 00:59:59.513349 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.21s 2026-04-07 00:59:59.513352 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.68s 2026-04-07 00:59:59.513356 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.54s 2026-04-07 00:59:59.513360 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.49s 2026-04-07 00:59:59.513363 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.25s 2026-04-07 00:59:59.513367 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.24s 2026-04-07 00:59:59.513371 | orchestrator | ceph-mon : Generate systemd unit file for mon container ----------------- 3.13s 2026-04-07 00:59:59.513375 | orchestrator | 2026-04-07 00:59:59 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:02.547751 | orchestrator | 2026-04-07 01:00:02 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:02.548856 | orchestrator | 2026-04-07 01:00:02 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:02.550419 | orchestrator | 2026-04-07 01:00:02 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:02.550642 | orchestrator | 2026-04-07 01:00:02 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:05.605213 | orchestrator | 2026-04-07 01:00:05 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:05.606215 | orchestrator | 2026-04-07 01:00:05 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 
2026-04-07 01:00:05.607420 | orchestrator | 2026-04-07 01:00:05 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:05.607488 | orchestrator | 2026-04-07 01:00:05 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:08.659301 | orchestrator | 2026-04-07 01:00:08 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:08.661718 | orchestrator | 2026-04-07 01:00:08 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:08.664073 | orchestrator | 2026-04-07 01:00:08 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:08.664123 | orchestrator | 2026-04-07 01:00:08 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:11.723566 | orchestrator | 2026-04-07 01:00:11 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:11.725443 | orchestrator | 2026-04-07 01:00:11 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:11.729163 | orchestrator | 2026-04-07 01:00:11 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:11.729203 | orchestrator | 2026-04-07 01:00:11 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:14.764840 | orchestrator | 2026-04-07 01:00:14 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:14.765741 | orchestrator | 2026-04-07 01:00:14 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:14.766503 | orchestrator | 2026-04-07 01:00:14 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:14.766587 | orchestrator | 2026-04-07 01:00:14 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:17.807357 | orchestrator | 2026-04-07 01:00:17 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:17.809163 | orchestrator | 2026-04-07 
01:00:17 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:17.813087 | orchestrator | 2026-04-07 01:00:17 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:17.813137 | orchestrator | 2026-04-07 01:00:17 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:20.855845 | orchestrator | 2026-04-07 01:00:20 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:20.857520 | orchestrator | 2026-04-07 01:00:20 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:20.858347 | orchestrator | 2026-04-07 01:00:20 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:20.858367 | orchestrator | 2026-04-07 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:23.889715 | orchestrator | 2026-04-07 01:00:23 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:23.891362 | orchestrator | 2026-04-07 01:00:23 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:23.892973 | orchestrator | 2026-04-07 01:00:23 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:23.893150 | orchestrator | 2026-04-07 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:26.942492 | orchestrator | 2026-04-07 01:00:26 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:26.944675 | orchestrator | 2026-04-07 01:00:26 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:26.947190 | orchestrator | 2026-04-07 01:00:26 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:26.947285 | orchestrator | 2026-04-07 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:30.007869 | orchestrator | 2026-04-07 01:00:30 | INFO  | Task 
d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:30.010312 | orchestrator | 2026-04-07 01:00:30 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:30.013093 | orchestrator | 2026-04-07 01:00:30 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:30.013164 | orchestrator | 2026-04-07 01:00:30 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:33.063113 | orchestrator | 2026-04-07 01:00:33 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:33.065022 | orchestrator | 2026-04-07 01:00:33 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:33.066227 | orchestrator | 2026-04-07 01:00:33 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:33.066464 | orchestrator | 2026-04-07 01:00:33 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:36.109373 | orchestrator | 2026-04-07 01:00:36 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:36.111421 | orchestrator | 2026-04-07 01:00:36 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:36.114534 | orchestrator | 2026-04-07 01:00:36 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:36.114593 | orchestrator | 2026-04-07 01:00:36 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:39.160854 | orchestrator | 2026-04-07 01:00:39 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:39.162605 | orchestrator | 2026-04-07 01:00:39 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:39.165049 | orchestrator | 2026-04-07 01:00:39 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:39.165112 | orchestrator | 2026-04-07 01:00:39 | INFO  | Wait 1 second(s) until the next 
check 2026-04-07 01:00:42.205300 | orchestrator | 2026-04-07 01:00:42 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:42.205969 | orchestrator | 2026-04-07 01:00:42 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:42.207460 | orchestrator | 2026-04-07 01:00:42 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state STARTED 2026-04-07 01:00:42.207512 | orchestrator | 2026-04-07 01:00:42 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:45.262931 | orchestrator | 2026-04-07 01:00:45 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:45.263009 | orchestrator | 2026-04-07 01:00:45 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:45.265707 | orchestrator | 2026-04-07 01:00:45 | INFO  | Task 4f8f5ca4-07a5-454c-aee3-3aa55745164a is in state SUCCESS 2026-04-07 01:00:45.266869 | orchestrator | 2026-04-07 01:00:45.266901 | orchestrator | 2026-04-07 01:00:45.266906 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:00:45.266912 | orchestrator | 2026-04-07 01:00:45.266918 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:00:45.266924 | orchestrator | Tuesday 07 April 2026 00:58:17 +0000 (0:00:00.286) 0:00:00.286 ********* 2026-04-07 01:00:45.266931 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:00:45.266941 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:00:45.266949 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:00:45.266956 | orchestrator | 2026-04-07 01:00:45.266962 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:00:45.266968 | orchestrator | Tuesday 07 April 2026 00:58:18 +0000 (0:00:00.256) 0:00:00.542 ********* 2026-04-07 01:00:45.266975 | orchestrator | ok: [testbed-node-0] => 
(item=enable_opensearch_True) 2026-04-07 01:00:45.266982 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-07 01:00:45.266989 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-07 01:00:45.266995 | orchestrator | 2026-04-07 01:00:45.267002 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-07 01:00:45.267008 | orchestrator | 2026-04-07 01:00:45.267014 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 01:00:45.267020 | orchestrator | Tuesday 07 April 2026 00:58:18 +0000 (0:00:00.252) 0:00:00.795 ********* 2026-04-07 01:00:45.267027 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:00:45.267033 | orchestrator | 2026-04-07 01:00:45.267039 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-07 01:00:45.267046 | orchestrator | Tuesday 07 April 2026 00:58:19 +0000 (0:00:00.556) 0:00:01.352 ********* 2026-04-07 01:00:45.267052 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-07 01:00:45.267062 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-07 01:00:45.267068 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-07 01:00:45.267074 | orchestrator | 2026-04-07 01:00:45.267080 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-07 01:00:45.267087 | orchestrator | Tuesday 07 April 2026 00:58:21 +0000 (0:00:01.999) 0:00:03.351 ********* 2026-04-07 01:00:45.267096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267193 | orchestrator | 2026-04-07 01:00:45.267197 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 01:00:45.267201 | orchestrator | Tuesday 07 April 2026 00:58:22 +0000 (0:00:01.472) 0:00:04.824 ********* 2026-04-07 01:00:45.267205 | 
orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:00:45.267209 | orchestrator | 2026-04-07 01:00:45.267212 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-07 01:00:45.267216 | orchestrator | Tuesday 07 April 2026 00:58:23 +0000 (0:00:00.553) 0:00:05.377 ********* 2026-04-07 01:00:45.267226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267357 | orchestrator | 2026-04-07 01:00:45.267363 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-07 01:00:45.267367 | orchestrator | Tuesday 07 April 2026 00:58:26 +0000 (0:00:03.424) 0:00:08.802 ********* 2026-04-07 01:00:45.267372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 01:00:45.267379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 01:00:45.267387 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:00:45.267391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 01:00:45.267398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 01:00:45.267403 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:00:45.267407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 01:00:45.267413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 01:00:45.267421 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:00:45.267424 | orchestrator | 2026-04-07 01:00:45.267428 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-07 01:00:45.267432 | orchestrator | Tuesday 07 April 2026 00:58:27 +0000 (0:00:00.689) 0:00:09.491 ********* 2026-04-07 01:00:45.267436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 01:00:45.267444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 01:00:45.267448 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:00:45.267452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 01:00:45.267463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 01:00:45.267470 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:00:45.267474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 01:00:45.267482 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 01:00:45.267486 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:00:45.267490 | orchestrator | 2026-04-07 01:00:45.267494 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-07 01:00:45.267498 | orchestrator | Tuesday 07 April 2026 00:58:27 +0000 (0:00:00.827) 0:00:10.319 ********* 2026-04-07 01:00:45.267501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267525 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267541 | orchestrator | 2026-04-07 01:00:45.267545 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-07 01:00:45.267549 | orchestrator | Tuesday 07 April 2026 00:58:30 +0000 (0:00:02.649) 0:00:12.969 ********* 2026-04-07 01:00:45.267553 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:00:45.267557 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:00:45.267560 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:00:45.267564 | orchestrator | 2026-04-07 01:00:45.267570 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-07 01:00:45.267574 | orchestrator | Tuesday 07 April 2026 00:58:32 +0000 
(0:00:01.980) 0:00:14.949 ********* 2026-04-07 01:00:45.267578 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:00:45.267581 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:00:45.267585 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:00:45.267589 | orchestrator | 2026-04-07 01:00:45.267593 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-07 01:00:45.267596 | orchestrator | Tuesday 07 April 2026 00:58:34 +0000 (0:00:01.818) 0:00:16.767 ********* 2026-04-07 01:00:45.267600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 01:00:45.267619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 01:00:45.267639 | orchestrator | 2026-04-07 01:00:45.267643 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 01:00:45.267646 | orchestrator | Tuesday 07 April 2026 00:58:36 +0000 (0:00:02.006) 0:00:18.774 ********* 2026-04-07 01:00:45.267654 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:00:45.267658 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:00:45.267662 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:00:45.267665 | orchestrator | 2026-04-07 01:00:45.267669 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-07 01:00:45.267673 | orchestrator | Tuesday 07 April 2026 00:58:36 +0000 (0:00:00.468) 0:00:19.242 ********* 2026-04-07 01:00:45.267676 | orchestrator | 2026-04-07 01:00:45.267680 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-07 01:00:45.267684 | orchestrator | Tuesday 07 April 2026 00:58:36 +0000 (0:00:00.061) 0:00:19.303 ********* 2026-04-07 01:00:45.267688 | orchestrator | 2026-04-07 01:00:45.267691 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-07 01:00:45.267695 | orchestrator | Tuesday 07 April 2026 00:58:37 +0000 (0:00:00.063) 0:00:19.367 ********* 2026-04-07 01:00:45.267699 | orchestrator | 2026-04-07 01:00:45.267702 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-07 01:00:45.267706 | orchestrator | 
Tuesday 07 April 2026 00:58:37 +0000 (0:00:00.066) 0:00:19.433 ********* 2026-04-07 01:00:45.267710 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:00:45.267714 | orchestrator | 2026-04-07 01:00:45.267717 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-07 01:00:45.267721 | orchestrator | Tuesday 07 April 2026 00:58:37 +0000 (0:00:00.217) 0:00:19.651 ********* 2026-04-07 01:00:45.267725 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:00:45.267728 | orchestrator | 2026-04-07 01:00:45.267732 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-07 01:00:45.267736 | orchestrator | Tuesday 07 April 2026 00:58:37 +0000 (0:00:00.231) 0:00:19.883 ********* 2026-04-07 01:00:45.267740 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:00:45.267743 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:00:45.267747 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:00:45.267751 | orchestrator | 2026-04-07 01:00:45.267755 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-07 01:00:45.267759 | orchestrator | Tuesday 07 April 2026 00:59:28 +0000 (0:00:50.877) 0:01:10.761 ********* 2026-04-07 01:00:45.267762 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:00:45.267766 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:00:45.267770 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:00:45.267773 | orchestrator | 2026-04-07 01:00:45.267777 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 01:00:45.267781 | orchestrator | Tuesday 07 April 2026 01:00:32 +0000 (0:01:04.291) 0:02:15.053 ********* 2026-04-07 01:00:45.267785 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:00:45.267788 | orchestrator | 2026-04-07 
01:00:45.267792 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-07 01:00:45.267796 | orchestrator | Tuesday 07 April 2026 01:00:33 +0000 (0:00:00.550) 0:02:15.604 ********* 2026-04-07 01:00:45.267800 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:00:45.267804 | orchestrator | 2026-04-07 01:00:45.267808 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-07 01:00:45.267812 | orchestrator | Tuesday 07 April 2026 01:00:35 +0000 (0:00:01.955) 0:02:17.559 ********* 2026-04-07 01:00:45.267815 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:00:45.267819 | orchestrator | 2026-04-07 01:00:45.267823 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-07 01:00:45.267826 | orchestrator | Tuesday 07 April 2026 01:00:36 +0000 (0:00:01.763) 0:02:19.323 ********* 2026-04-07 01:00:45.267840 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:00:45.267844 | orchestrator | 2026-04-07 01:00:45.267848 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-07 01:00:45.267858 | orchestrator | Tuesday 07 April 2026 01:00:38 +0000 (0:00:01.899) 0:02:21.223 ********* 2026-04-07 01:00:45.267866 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:00:45.267870 | orchestrator | 2026-04-07 01:00:45.267923 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-07 01:00:45.267934 | orchestrator | Tuesday 07 April 2026 01:00:41 +0000 (0:00:02.382) 0:02:23.605 ********* 2026-04-07 01:00:45.267938 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:00:45.267942 | orchestrator | 2026-04-07 01:00:45.267945 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:00:45.267950 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-07 01:00:45.267964 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 01:00:45.267971 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 01:00:45.267975 | orchestrator | 2026-04-07 01:00:45.267979 | orchestrator | 2026-04-07 01:00:45.267987 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:00:45.267991 | orchestrator | Tuesday 07 April 2026 01:00:43 +0000 (0:00:02.522) 0:02:26.128 ********* 2026-04-07 01:00:45.267995 | orchestrator | =============================================================================== 2026-04-07 01:00:45.267999 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 64.29s 2026-04-07 01:00:45.268003 | orchestrator | opensearch : Restart opensearch container ------------------------------ 50.88s 2026-04-07 01:00:45.268006 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.42s 2026-04-07 01:00:45.268010 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.65s 2026-04-07 01:00:45.268014 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.52s 2026-04-07 01:00:45.268017 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.38s 2026-04-07 01:00:45.268021 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.01s 2026-04-07 01:00:45.268025 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.00s 2026-04-07 01:00:45.268029 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 1.98s 2026-04-07 01:00:45.268032 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 1.96s 2026-04-07 
01:00:45.268036 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 1.90s 2026-04-07 01:00:45.268040 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.82s 2026-04-07 01:00:45.268044 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 1.76s 2026-04-07 01:00:45.268047 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.47s 2026-04-07 01:00:45.268051 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.83s 2026-04-07 01:00:45.268055 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.69s 2026-04-07 01:00:45.268058 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-04-07 01:00:45.268062 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-04-07 01:00:45.268066 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-04-07 01:00:45.268070 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2026-04-07 01:00:45.268073 | orchestrator | 2026-04-07 01:00:45 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:48.327030 | orchestrator | 2026-04-07 01:00:48 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:48.329292 | orchestrator | 2026-04-07 01:00:48 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:48.329374 | orchestrator | 2026-04-07 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:51.371078 | orchestrator | 2026-04-07 01:00:51 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:51.372869 | orchestrator | 2026-04-07 01:00:51 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state 
STARTED 2026-04-07 01:00:51.372923 | orchestrator | 2026-04-07 01:00:51 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:54.442054 | orchestrator | 2026-04-07 01:00:54 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:54.442997 | orchestrator | 2026-04-07 01:00:54 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:54.443026 | orchestrator | 2026-04-07 01:00:54 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:00:57.493864 | orchestrator | 2026-04-07 01:00:57 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:00:57.496558 | orchestrator | 2026-04-07 01:00:57 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:00:57.496634 | orchestrator | 2026-04-07 01:00:57 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:00.540890 | orchestrator | 2026-04-07 01:01:00 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:01:00.540941 | orchestrator | 2026-04-07 01:01:00 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:00.540947 | orchestrator | 2026-04-07 01:01:00 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:03.589866 | orchestrator | 2026-04-07 01:01:03 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:01:03.592557 | orchestrator | 2026-04-07 01:01:03 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:03.592602 | orchestrator | 2026-04-07 01:01:03 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:06.637182 | orchestrator | 2026-04-07 01:01:06 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:01:06.638206 | orchestrator | 2026-04-07 01:01:06 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:06.638292 | orchestrator | 2026-04-07 01:01:06 | INFO  
| Wait 1 second(s) until the next check 2026-04-07 01:01:09.677314 | orchestrator | 2026-04-07 01:01:09 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state STARTED 2026-04-07 01:01:09.678287 | orchestrator | 2026-04-07 01:01:09 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:09.678706 | orchestrator | 2026-04-07 01:01:09 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:12.743388 | orchestrator | 2026-04-07 01:01:12 | INFO  | Task d8aa664a-5681-4a3e-91ce-78ad6a602d12 is in state SUCCESS 2026-04-07 01:01:12.744515 | orchestrator | 2026-04-07 01:01:12.744607 | orchestrator | 2026-04-07 01:01:12.744624 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-07 01:01:12.744631 | orchestrator | 2026-04-07 01:01:12.744635 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-07 01:01:12.744640 | orchestrator | Tuesday 07 April 2026 00:58:17 +0000 (0:00:00.086) 0:00:00.086 ********* 2026-04-07 01:01:12.744644 | orchestrator | ok: [localhost] => { 2026-04-07 01:01:12.744650 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-07 01:01:12.744655 | orchestrator | } 2026-04-07 01:01:12.744659 | orchestrator | 2026-04-07 01:01:12.744663 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-07 01:01:12.744778 | orchestrator | Tuesday 07 April 2026 00:58:17 +0000 (0:00:00.050) 0:00:00.137 ********* 2026-04-07 01:01:12.744807 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-07 01:01:12.744817 | orchestrator | ...ignoring 2026-04-07 01:01:12.744826 | orchestrator | 2026-04-07 01:01:12.744834 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-07 01:01:12.744841 | orchestrator | Tuesday 07 April 2026 00:58:20 +0000 (0:00:02.794) 0:00:02.931 ********* 2026-04-07 01:01:12.744847 | orchestrator | skipping: [localhost] 2026-04-07 01:01:12.744853 | orchestrator | 2026-04-07 01:01:12.744860 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-07 01:01:12.744866 | orchestrator | Tuesday 07 April 2026 00:58:20 +0000 (0:00:00.048) 0:00:02.980 ********* 2026-04-07 01:01:12.744871 | orchestrator | ok: [localhost] 2026-04-07 01:01:12.744877 | orchestrator | 2026-04-07 01:01:12.744884 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:01:12.744890 | orchestrator | 2026-04-07 01:01:12.744896 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:01:12.744903 | orchestrator | Tuesday 07 April 2026 00:58:20 +0000 (0:00:00.191) 0:00:03.172 ********* 2026-04-07 01:01:12.744909 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:01:12.744916 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:01:12.744922 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:01:12.744927 | orchestrator | 2026-04-07 01:01:12.744933 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:01:12.744940 | orchestrator | Tuesday 07 April 2026 00:58:21 +0000 (0:00:00.260) 0:00:03.432 ********* 2026-04-07 01:01:12.744946 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-07 01:01:12.744953 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-04-07 01:01:12.744967 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-07 01:01:12.744973 | orchestrator | 2026-04-07 01:01:12.744979 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-07 01:01:12.744984 | orchestrator | 2026-04-07 01:01:12.744991 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-07 01:01:12.745010 | orchestrator | Tuesday 07 April 2026 00:58:21 +0000 (0:00:00.510) 0:00:03.942 ********* 2026-04-07 01:01:12.745018 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 01:01:12.745024 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-07 01:01:12.745028 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-07 01:01:12.745031 | orchestrator | 2026-04-07 01:01:12.745035 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-07 01:01:12.745039 | orchestrator | Tuesday 07 April 2026 00:58:22 +0000 (0:00:00.373) 0:00:04.316 ********* 2026-04-07 01:01:12.745043 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:01:12.745049 | orchestrator | 2026-04-07 01:01:12.745053 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-07 01:01:12.745057 | orchestrator | Tuesday 07 April 2026 00:58:22 +0000 (0:00:00.544) 0:00:04.860 ********* 2026-04-07 01:01:12.745077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 01:01:12.745096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 01:01:12.745102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 01:01:12.745110 | orchestrator | 2026-04-07 01:01:12.745118 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-07 01:01:12.745122 | orchestrator | Tuesday 07 April 2026 00:58:26 +0000 (0:00:03.880) 0:00:08.740 ********* 2026-04-07 01:01:12.745126 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:01:12.745131 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:01:12.745135 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:01:12.745139 | orchestrator | 2026-04-07 01:01:12.745142 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-07 01:01:12.745146 | orchestrator | Tuesday 07 April 2026 00:58:27 +0000 (0:00:00.602) 0:00:09.342 ********* 2026-04-07 01:01:12.745150 | orchestrator | skipping: [testbed-node-1] 2026-04-07 
01:01:12.745154 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:01:12.745157 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:01:12.745161 | orchestrator | 2026-04-07 01:01:12.745165 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-07 01:01:12.745169 | orchestrator | Tuesday 07 April 2026 00:58:28 +0000 (0:00:01.264) 0:00:10.606 ********* 2026-04-07 01:01:12.745176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 01:01:12.745183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 01:01:12.745191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 
01:01:12.745195 | orchestrator | 2026-04-07 01:01:12.745202 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-07 01:01:12.745206 | orchestrator | Tuesday 07 April 2026 00:58:31 +0000 (0:00:03.095) 0:00:13.702 ********* 2026-04-07 01:01:12.745209 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:01:12.745213 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:01:12.745217 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:01:12.745221 | orchestrator | 2026-04-07 01:01:12.745224 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-07 01:01:12.745228 | orchestrator | Tuesday 07 April 2026 00:58:32 +0000 (0:00:00.955) 0:00:14.658 ********* 2026-04-07 01:01:12.745232 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:01:12.745236 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:01:12.745239 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:01:12.745246 | orchestrator | 2026-04-07 01:01:12.745250 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-07 01:01:12.745278 | orchestrator | Tuesday 07 April 2026 00:58:36 +0000 (0:00:03.986) 0:00:18.644 ********* 2026-04-07 01:01:12.745282 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:01:12.745286 | orchestrator | 2026-04-07 01:01:12.745289 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-07 01:01:12.745293 | orchestrator | Tuesday 07 April 2026 00:58:36 +0000 (0:00:00.485) 0:00:19.129 ********* 2026-04-07 01:01:12.745302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 01:01:12.745307 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:01:12.745314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 01:01:12.745322 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.745329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745334 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.745338 | orchestrator |
2026-04-07 01:01:12.745341 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-07 01:01:12.745345 | orchestrator | Tuesday 07 April 2026 00:58:40 +0000 (0:00:03.542) 0:00:22.671 *********
2026-04-07 01:01:12.745352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745360 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.745366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745370 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.745374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745381 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.745385 | orchestrator |
2026-04-07 01:01:12.745394 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-07 01:01:12.745398 | orchestrator | Tuesday 07 April 2026 00:58:42 +0000 (0:00:02.349) 0:00:25.020 *********
2026-04-07 01:01:12.745402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745407 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.745416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745425 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.745433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745438 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.745442 | orchestrator |
2026-04-07 01:01:12.745447 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-04-07 01:01:12.745452 | orchestrator | Tuesday 07 April 2026 00:58:46 +0000
(0:00:03.556) 0:00:28.576 *********
2026-04-07 01:01:12.745461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {...}})
2026-04-07 01:01:12.745486 | orchestrator |
2026-04-07 01:01:12.745491 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-04-07 01:01:12.745495 | orchestrator | Tuesday 07 April 2026 00:58:50 +0000 (0:00:03.665) 0:00:32.242 *********
2026-04-07 01:01:12.745500 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:01:12.745504 | orchestrator |
changed: [testbed-node-1]
2026-04-07 01:01:12.745512 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:01:12.745517 | orchestrator |
2026-04-07 01:01:12.745521 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-07 01:01:12.745525 | orchestrator | Tuesday 07 April 2026 00:58:51 +0000 (0:00:00.944) 0:00:33.186 *********
2026-04-07 01:01:12.745530 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.745534 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:01:12.745539 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:01:12.745543 | orchestrator |
2026-04-07 01:01:12.745548 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-07 01:01:12.745553 | orchestrator | Tuesday 07 April 2026 00:58:51 +0000 (0:00:00.459) 0:00:33.646 *********
2026-04-07 01:01:12.745557 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.745562 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:01:12.745566 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:01:12.745656 | orchestrator |
2026-04-07 01:01:12.745661 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-07 01:01:12.745666 | orchestrator | Tuesday 07 April 2026 00:58:51 +0000 (0:00:00.421) 0:00:34.068 *********
2026-04-07 01:01:12.745674 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-04-07 01:01:12.745679 | orchestrator | ...ignoring
2026-04-07 01:01:12.745719 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-04-07 01:01:12.745724 | orchestrator | ...ignoring
2026-04-07 01:01:12.745729 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-04-07 01:01:12.745733 | orchestrator | ...ignoring
2026-04-07 01:01:12.745737 | orchestrator |
2026-04-07 01:01:12.745742 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-07 01:01:12.745747 | orchestrator | Tuesday 07 April 2026 00:59:03 +0000 (0:00:11.174) 0:00:45.242 *********
2026-04-07 01:01:12.745751 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.745756 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:01:12.745760 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:01:12.745765 | orchestrator |
2026-04-07 01:01:12.745769 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-07 01:01:12.745774 | orchestrator | Tuesday 07 April 2026 00:59:03 +0000 (0:00:00.465) 0:00:45.708 *********
2026-04-07 01:01:12.745779 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.745783 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.745788 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.745793 | orchestrator |
2026-04-07 01:01:12.745798 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-07 01:01:12.745802 | orchestrator | Tuesday 07 April 2026 00:59:03 +0000 (0:00:00.406) 0:00:46.114 *********
2026-04-07 01:01:12.745807 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.745811 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.745816 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.745820 | orchestrator |
2026-04-07 01:01:12.745825 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-07 01:01:12.745829 | orchestrator | Tuesday 07 April 2026 00:59:04 +0000 (0:00:00.438) 0:00:46.553 *********
2026-04-07 01:01:12.745834 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.745838 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.745843 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.745847 | orchestrator |
2026-04-07 01:01:12.745850 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-07 01:01:12.745854 | orchestrator | Tuesday 07 April 2026 00:59:05 +0000 (0:00:00.655) 0:00:47.208 *********
2026-04-07 01:01:12.745858 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.745862 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:01:12.745866 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:01:12.745875 | orchestrator |
2026-04-07 01:01:12.745878 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-07 01:01:12.745882 | orchestrator | Tuesday 07 April 2026 00:59:05 +0000 (0:00:00.451) 0:00:47.660 *********
2026-04-07 01:01:12.745889 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.745903 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.745908 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.745911 | orchestrator |
2026-04-07 01:01:12.745921 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-07 01:01:12.745925 | orchestrator | Tuesday 07 April 2026 00:59:05 +0000 (0:00:00.421) 0:00:48.081 *********
2026-04-07 01:01:12.745929 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.745933 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.745937 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-07 01:01:12.745941 | orchestrator |
2026-04-07 01:01:12.745944 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-07 01:01:12.745948 | orchestrator | Tuesday 07 April 2026 00:59:06 +0000 (0:00:00.336) 0:00:48.417 *********
2026-04-07 01:01:12.745953 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:01:12.745956 | orchestrator |
2026-04-07 01:01:12.745960 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-04-07 01:01:12.745964 | orchestrator | Tuesday 07 April 2026 00:59:15 +0000 (0:00:09.760) 0:00:58.177 *********
2026-04-07 01:01:12.745968 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.745972 | orchestrator |
2026-04-07 01:01:12.745976 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-07 01:01:12.745980 | orchestrator | Tuesday 07 April 2026 00:59:16 +0000 (0:00:00.249) 0:00:58.427 *********
2026-04-07 01:01:12.745984 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.745987 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.745991 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.745995 | orchestrator |
2026-04-07 01:01:12.745999 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-04-07 01:01:12.746002 | orchestrator | Tuesday 07 April 2026 00:59:16 +0000 (0:00:00.736) 0:00:59.163 *********
2026-04-07 01:01:12.746006 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:01:12.746010 | orchestrator |
2026-04-07 01:01:12.746041 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-04-07 01:01:12.746046 | orchestrator | Tuesday 07 April 2026 00:59:24 +0000 (0:00:07.809) 0:01:06.973 *********
2026-04-07 01:01:12.746049 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.746053 | orchestrator |
2026-04-07 01:01:12.746057 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-04-07 01:01:12.746061 | orchestrator | Tuesday 07 April 2026 00:59:26 +0000 (0:00:01.552) 0:01:08.525 *********
2026-04-07 01:01:12.746065 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.746069 |
orchestrator |
2026-04-07 01:01:12.746072 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-04-07 01:01:12.746076 | orchestrator | Tuesday 07 April 2026 00:59:29 +0000 (0:00:02.947) 0:01:11.472 *********
2026-04-07 01:01:12.746081 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:01:12.746084 | orchestrator |
2026-04-07 01:01:12.746088 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-07 01:01:12.746092 | orchestrator | Tuesday 07 April 2026 00:59:29 +0000 (0:00:00.105) 0:01:11.578 *********
2026-04-07 01:01:12.746099 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.746103 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.746107 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.746111 | orchestrator |
2026-04-07 01:01:12.746115 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-07 01:01:12.746119 | orchestrator | Tuesday 07 April 2026 00:59:29 +0000 (0:00:00.389) 0:01:11.967 *********
2026-04-07 01:01:12.746122 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.746130 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:01:12.746134 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:01:12.746137 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-07 01:01:12.746141 | orchestrator |
2026-04-07 01:01:12.746145 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-07 01:01:12.746149 | orchestrator | skipping: no hosts matched
2026-04-07 01:01:12.746153 | orchestrator |
2026-04-07 01:01:12.746157 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-07 01:01:12.746160 | orchestrator |
2026-04-07 01:01:12.746164 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-07 01:01:12.746168 | orchestrator | Tuesday 07 April 2026 00:59:30 +0000 (0:00:00.345) 0:01:12.313 *********
2026-04-07 01:01:12.746172 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:01:12.746176 | orchestrator |
2026-04-07 01:01:12.746180 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-07 01:01:12.746183 | orchestrator | Tuesday 07 April 2026 00:59:47 +0000 (0:00:16.938) 0:01:29.251 *********
2026-04-07 01:01:12.746187 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:01:12.746191 | orchestrator |
2026-04-07 01:01:12.746195 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-07 01:01:12.746199 | orchestrator | Tuesday 07 April 2026 01:00:02 +0000 (0:00:15.591) 0:01:44.843 *********
2026-04-07 01:01:12.746203 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:01:12.746206 | orchestrator |
2026-04-07 01:01:12.746210 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-07 01:01:12.746214 | orchestrator |
2026-04-07 01:01:12.746218 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-07 01:01:12.746222 | orchestrator | Tuesday 07 April 2026 01:00:05 +0000 (0:00:02.602) 0:01:47.446 *********
2026-04-07 01:01:12.746226 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:01:12.746229 | orchestrator |
2026-04-07 01:01:12.746233 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-07 01:01:12.746237 | orchestrator | Tuesday 07 April 2026 01:00:22 +0000 (0:00:16.913) 0:02:04.359 *********
2026-04-07 01:01:12.746241 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:01:12.746245 | orchestrator |
2026-04-07 01:01:12.746249 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-07 01:01:12.746267 | orchestrator | Tuesday 07 April 2026 01:00:37 +0000 (0:00:15.747) 0:02:20.107 *********
2026-04-07 01:01:12.746271 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:01:12.746275 | orchestrator |
2026-04-07 01:01:12.746279 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-07 01:01:12.746283 | orchestrator |
2026-04-07 01:01:12.746291 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-07 01:01:12.746295 | orchestrator | Tuesday 07 April 2026 01:00:40 +0000 (0:00:02.222) 0:02:22.330 *********
2026-04-07 01:01:12.746299 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:01:12.746303 | orchestrator |
2026-04-07 01:01:12.746307 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-07 01:01:12.746311 | orchestrator | Tuesday 07 April 2026 01:00:56 +0000 (0:00:16.440) 0:02:38.771 *********
2026-04-07 01:01:12.746314 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.746318 | orchestrator |
2026-04-07 01:01:12.746323 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-07 01:01:12.746329 | orchestrator | Tuesday 07 April 2026 01:00:57 +0000 (0:00:00.527) 0:02:39.299 *********
2026-04-07 01:01:12.746335 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.746341 | orchestrator |
2026-04-07 01:01:12.746346 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-07 01:01:12.746352 | orchestrator |
2026-04-07 01:01:12.746357 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-07 01:01:12.746363 | orchestrator | Tuesday 07 April 2026 01:00:59 +0000 (0:00:02.530) 0:02:41.829 *********
2026-04-07 01:01:12.746369 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:01:12.746379 | orchestrator |
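The plays above show the role's Galera start ordering: every node fails the first port-liveness probe (no cluster exists yet), testbed-node-0 runs the bootstrap container and is started first, and the remaining nodes join one at a time, each waiting for port liveness and WSREP sync before the next proceeds. A minimal sketch of that ordering decision, with invented names (`plan_cluster_start` is illustrative, not kolla-ansible's actual code, and it simplifies the role's volume/liveness conditions):

```python
# Illustrative sketch of the Galera start ordering seen in the log;
# function and state names are hypothetical, not kolla-ansible's code.

def plan_cluster_start(hosts, has_volume, port_alive):
    """Decide how each host should bring up MariaDB/Galera.

    hosts      -- ordered list of inventory names
    has_volume -- hosts whose mariadb volume existed before this run
    port_alive -- hosts where 3306 already answers as MariaDB
    """
    if port_alive:
        # A cluster is already running: new nodes simply join it.
        return {h: ("already-running" if h in port_alive else "join")
                for h in hosts}
    if has_volume and not port_alive:
        # Pre-existing data but nothing running: refuse, like the log's
        # "Fail on existing but stopped cluster" task would.
        raise RuntimeError("existing but stopped cluster")
    # Fresh deployment: bootstrap on the first host, join the rest serially.
    return {h: ("bootstrap" if h == hosts[0] else "join") for h in hosts}

plan = plan_cluster_start(
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"], set(), set())
# node-0 bootstraps; node-1 and node-2 join afterwards, one at a time.
```

In this run all three port-liveness probes timed out and the volumes were freshly created, which is why the "fresh deployment" branch applies and node-0 becomes the bootstrap host.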
2026-04-07 01:01:12.746385 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-07 01:01:12.746391 | orchestrator | Tuesday 07 April 2026 01:01:00 +0000 (0:00:00.702) 0:02:42.532 *********
2026-04-07 01:01:12.746397 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.746403 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.746408 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:01:12.746414 | orchestrator |
2026-04-07 01:01:12.746420 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-07 01:01:12.746426 | orchestrator | Tuesday 07 April 2026 01:01:02 +0000 (0:00:02.269) 0:02:44.802 *********
2026-04-07 01:01:12.746432 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.746437 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.746443 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:01:12.746449 | orchestrator |
2026-04-07 01:01:12.746455 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-07 01:01:12.746461 | orchestrator | Tuesday 07 April 2026 01:01:04 +0000 (0:00:02.100) 0:02:46.903 *********
2026-04-07 01:01:12.746467 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.746473 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.746479 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:01:12.746485 | orchestrator |
2026-04-07 01:01:12.746489 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-07 01:01:12.746493 | orchestrator | Tuesday 07 April 2026 01:01:06 +0000 (0:00:02.051) 0:02:48.954 *********
2026-04-07 01:01:12.746497 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.746500 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.746504 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:01:12.746508 | orchestrator |
2026-04-07 01:01:12.746516 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-07 01:01:12.746520 | orchestrator | Tuesday 07 April 2026 01:01:09 +0000 (0:00:02.262) 0:02:51.217 *********
2026-04-07 01:01:12.746523 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:01:12.746527 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:01:12.746531 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:01:12.746535 | orchestrator |
2026-04-07 01:01:12.746538 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-07 01:01:12.746542 | orchestrator | Tuesday 07 April 2026 01:01:11 +0000 (0:00:02.727) 0:02:53.944 *********
2026-04-07 01:01:12.746546 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:01:12.746550 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:01:12.746554 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:01:12.746557 | orchestrator |
2026-04-07 01:01:12.746561 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:01:12.746565 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-07 01:01:12.746569 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-04-07 01:01:12.746575 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-07 01:01:12.746579 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-07 01:01:12.746583 | orchestrator |
2026-04-07 01:01:12.746586 | orchestrator |
2026-04-07 01:01:12.746590 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:01:12.746595 | orchestrator | Tuesday 07 April 2026 01:01:11 +0000 (0:00:00.215) 0:02:54.160 *********
2026-04-07 01:01:12.746601 | orchestrator | ===============================================================================
2026-04-07 01:01:12.746616 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.85s
2026-04-07 01:01:12.746625 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.34s
2026-04-07 01:01:12.746631 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.44s
2026-04-07 01:01:12.746637 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.17s
2026-04-07 01:01:12.746643 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.76s
2026-04-07 01:01:12.746649 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.81s
2026-04-07 01:01:12.746660 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.83s
2026-04-07 01:01:12.746665 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.99s
2026-04-07 01:01:12.746671 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.88s
2026-04-07 01:01:12.746676 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.67s
2026-04-07 01:01:12.746682 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.56s
2026-04-07 01:01:12.746688 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.54s
2026-04-07 01:01:12.746694 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.10s
2026-04-07 01:01:12.746700 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.95s
2026-04-07 01:01:12.746706 | orchestrator | Check MariaDB service --------------------------------------------------- 2.79s
2026-04-07 01:01:12.746712 | orchestrator |
mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.73s 2026-04-07 01:01:12.746717 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.53s 2026-04-07 01:01:12.746723 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.35s 2026-04-07 01:01:12.746729 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.27s 2026-04-07 01:01:12.746735 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.26s 2026-04-07 01:01:12.746741 | orchestrator | 2026-04-07 01:01:12 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:12.746748 | orchestrator | 2026-04-07 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:15.813413 | orchestrator | 2026-04-07 01:01:15 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:15.816734 | orchestrator | 2026-04-07 01:01:15 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:15.817380 | orchestrator | 2026-04-07 01:01:15 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:15.817654 | orchestrator | 2026-04-07 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:18.865395 | orchestrator | 2026-04-07 01:01:18 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:18.867760 | orchestrator | 2026-04-07 01:01:18 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:18.869070 | orchestrator | 2026-04-07 01:01:18 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:18.870166 | orchestrator | 2026-04-07 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:21.912969 | orchestrator | 2026-04-07 01:01:21 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state 
STARTED 2026-04-07 01:01:21.914164 | orchestrator | 2026-04-07 01:01:21 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:21.916044 | orchestrator | 2026-04-07 01:01:21 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:21.916102 | orchestrator | 2026-04-07 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:24.954338 | orchestrator | 2026-04-07 01:01:24 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:24.955073 | orchestrator | 2026-04-07 01:01:24 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:24.956214 | orchestrator | 2026-04-07 01:01:24 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:24.956298 | orchestrator | 2026-04-07 01:01:24 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:27.996384 | orchestrator | 2026-04-07 01:01:27 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:27.997057 | orchestrator | 2026-04-07 01:01:27 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:27.998708 | orchestrator | 2026-04-07 01:01:27 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:27.998759 | orchestrator | 2026-04-07 01:01:27 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:31.030405 | orchestrator | 2026-04-07 01:01:31 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:31.031501 | orchestrator | 2026-04-07 01:01:31 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:31.033603 | orchestrator | 2026-04-07 01:01:31 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:31.033671 | orchestrator | 2026-04-07 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:34.069389 | orchestrator | 
2026-04-07 01:01:34 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:34.069804 | orchestrator | 2026-04-07 01:01:34 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:34.071245 | orchestrator | 2026-04-07 01:01:34 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:34.071306 | orchestrator | 2026-04-07 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:37.117426 | orchestrator | 2026-04-07 01:01:37 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:37.122876 | orchestrator | 2026-04-07 01:01:37 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:37.124790 | orchestrator | 2026-04-07 01:01:37 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:37.124900 | orchestrator | 2026-04-07 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:40.162327 | orchestrator | 2026-04-07 01:01:40 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:40.164267 | orchestrator | 2026-04-07 01:01:40 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:40.167625 | orchestrator | 2026-04-07 01:01:40 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:40.167689 | orchestrator | 2026-04-07 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:43.210128 | orchestrator | 2026-04-07 01:01:43 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:43.210426 | orchestrator | 2026-04-07 01:01:43 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:43.211413 | orchestrator | 2026-04-07 01:01:43 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:43.211457 | orchestrator | 2026-04-07 01:01:43 | INFO  | 
Wait 1 second(s) until the next check 2026-04-07 01:01:46.270670 | orchestrator | 2026-04-07 01:01:46 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:46.272519 | orchestrator | 2026-04-07 01:01:46 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:46.274380 | orchestrator | 2026-04-07 01:01:46 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:46.274423 | orchestrator | 2026-04-07 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:49.316585 | orchestrator | 2026-04-07 01:01:49 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:49.318434 | orchestrator | 2026-04-07 01:01:49 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state STARTED 2026-04-07 01:01:49.319776 | orchestrator | 2026-04-07 01:01:49 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:49.319832 | orchestrator | 2026-04-07 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:52.368572 | orchestrator | 2026-04-07 01:01:52 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:52.370600 | orchestrator | 2026-04-07 01:01:52 | INFO  | Task d4a45b67-c983-4697-953b-2cf2e4f55797 is in state SUCCESS 2026-04-07 01:01:52.371836 | orchestrator | 2026-04-07 01:01:52.371910 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-07 01:01:52.371932 | orchestrator | 2.16.14 2026-04-07 01:01:52.371950 | orchestrator | 2026-04-07 01:01:52.371962 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-07 01:01:52.371976 | orchestrator | 2026-04-07 01:01:52.372107 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-07 01:01:52.372130 | orchestrator | Tuesday 07 April 2026 01:00:01 +0000 (0:00:00.519) 
0:00:00.519 ********* 2026-04-07 01:01:52.372139 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:01:52.372148 | orchestrator | 2026-04-07 01:01:52.372156 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-07 01:01:52.372442 | orchestrator | Tuesday 07 April 2026 01:00:01 +0000 (0:00:00.557) 0:00:01.077 ********* 2026-04-07 01:01:52.372451 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.372459 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.372467 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.372475 | orchestrator | 2026-04-07 01:01:52.372483 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-07 01:01:52.372492 | orchestrator | Tuesday 07 April 2026 01:00:02 +0000 (0:00:00.942) 0:00:02.019 ********* 2026-04-07 01:01:52.372500 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.372508 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.372516 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.372524 | orchestrator | 2026-04-07 01:01:52.372532 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-07 01:01:52.372678 | orchestrator | Tuesday 07 April 2026 01:00:03 +0000 (0:00:00.286) 0:00:02.305 ********* 2026-04-07 01:01:52.372690 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.372717 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.372737 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.372751 | orchestrator | 2026-04-07 01:01:52.372764 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-07 01:01:52.372840 | orchestrator | Tuesday 07 April 2026 01:00:03 +0000 (0:00:00.822) 0:00:03.128 ********* 2026-04-07 01:01:52.372856 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.372870 | 
orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.372884 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.372897 | orchestrator | 2026-04-07 01:01:52.372910 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-07 01:01:52.372924 | orchestrator | Tuesday 07 April 2026 01:00:04 +0000 (0:00:00.312) 0:00:03.440 ********* 2026-04-07 01:01:52.372962 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.372977 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.372991 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.373005 | orchestrator | 2026-04-07 01:01:52.373019 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-07 01:01:52.373033 | orchestrator | Tuesday 07 April 2026 01:00:04 +0000 (0:00:00.314) 0:00:03.755 ********* 2026-04-07 01:01:52.373048 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.373062 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.373075 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.373089 | orchestrator | 2026-04-07 01:01:52.373126 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-07 01:01:52.373142 | orchestrator | Tuesday 07 April 2026 01:00:04 +0000 (0:00:00.308) 0:00:04.063 ********* 2026-04-07 01:01:52.373157 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.373172 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.373185 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.373197 | orchestrator | 2026-04-07 01:01:52.373210 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-07 01:01:52.373223 | orchestrator | Tuesday 07 April 2026 01:00:05 +0000 (0:00:00.552) 0:00:04.616 ********* 2026-04-07 01:01:52.373236 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.373249 | orchestrator | ok: [testbed-node-4] 2026-04-07 
01:01:52.373263 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.373293 | orchestrator | 2026-04-07 01:01:52.373309 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-07 01:01:52.373323 | orchestrator | Tuesday 07 April 2026 01:00:05 +0000 (0:00:00.283) 0:00:04.900 ********* 2026-04-07 01:01:52.373336 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 01:01:52.373350 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 01:01:52.373364 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 01:01:52.373377 | orchestrator | 2026-04-07 01:01:52.373391 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-07 01:01:52.373405 | orchestrator | Tuesday 07 April 2026 01:00:06 +0000 (0:00:00.659) 0:00:05.560 ********* 2026-04-07 01:01:52.373418 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.373432 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.373446 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.373459 | orchestrator | 2026-04-07 01:01:52.373486 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-07 01:01:52.373500 | orchestrator | Tuesday 07 April 2026 01:00:06 +0000 (0:00:00.425) 0:00:05.985 ********* 2026-04-07 01:01:52.373515 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 01:01:52.373528 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 01:01:52.373542 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 01:01:52.373554 | orchestrator | 2026-04-07 01:01:52.373568 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] 
******************************** 2026-04-07 01:01:52.373580 | orchestrator | Tuesday 07 April 2026 01:00:09 +0000 (0:00:03.040) 0:00:09.025 ********* 2026-04-07 01:01:52.373594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-07 01:01:52.373608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-07 01:01:52.373621 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-07 01:01:52.373635 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.373649 | orchestrator | 2026-04-07 01:01:52.373717 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-07 01:01:52.373728 | orchestrator | Tuesday 07 April 2026 01:00:10 +0000 (0:00:00.397) 0:00:09.423 ********* 2026-04-07 01:01:52.373737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.373759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.373767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.373775 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.373783 | orchestrator | 2026-04-07 01:01:52.373791 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-07 01:01:52.373799 | orchestrator | Tuesday 07 April 2026 01:00:10 +0000 (0:00:00.835) 0:00:10.258 ********* 2026-04-07 
01:01:52.373809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.373820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.373829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.373837 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.373845 | orchestrator | 2026-04-07 01:01:52.373852 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-07 01:01:52.373860 | orchestrator | Tuesday 07 April 2026 01:00:11 +0000 (0:00:00.163) 0:00:10.421 ********* 2026-04-07 01:01:52.373875 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b7abc630b4d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-07 01:00:07.682292', 'end': '2026-04-07 
01:00:07.735499', 'delta': '0:00:00.053207', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b7abc630b4d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-07 01:01:52.373891 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e99664ee239b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-07 01:00:08.778296', 'end': '2026-04-07 01:00:08.799956', 'delta': '0:00:00.021660', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e99664ee239b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-07 01:01:52.373965 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1d4767c6e70d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-07 01:00:09.607153', 'end': '2026-04-07 01:00:09.639099', 'delta': '0:00:00.031946', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1d4767c6e70d'], 'stderr_lines': [], 
'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-07 01:01:52.373983 | orchestrator | 2026-04-07 01:01:52.373998 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-07 01:01:52.374053 | orchestrator | Tuesday 07 April 2026 01:00:11 +0000 (0:00:00.394) 0:00:10.816 ********* 2026-04-07 01:01:52.374065 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.374074 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.374081 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.374089 | orchestrator | 2026-04-07 01:01:52.374097 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-07 01:01:52.374106 | orchestrator | Tuesday 07 April 2026 01:00:11 +0000 (0:00:00.424) 0:00:11.240 ********* 2026-04-07 01:01:52.374114 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-07 01:01:52.374122 | orchestrator | 2026-04-07 01:01:52.374130 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-07 01:01:52.374137 | orchestrator | Tuesday 07 April 2026 01:00:13 +0000 (0:00:01.210) 0:00:12.451 ********* 2026-04-07 01:01:52.374145 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374153 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374161 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374169 | orchestrator | 2026-04-07 01:01:52.374177 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-07 01:01:52.374186 | orchestrator | Tuesday 07 April 2026 01:00:13 +0000 (0:00:00.288) 0:00:12.740 ********* 2026-04-07 01:01:52.374194 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374201 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374209 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374217 | 
orchestrator | 2026-04-07 01:01:52.374225 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-07 01:01:52.374233 | orchestrator | Tuesday 07 April 2026 01:00:13 +0000 (0:00:00.401) 0:00:13.142 ********* 2026-04-07 01:01:52.374241 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374249 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374257 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374265 | orchestrator | 2026-04-07 01:01:52.374273 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-07 01:01:52.374316 | orchestrator | Tuesday 07 April 2026 01:00:14 +0000 (0:00:00.481) 0:00:13.623 ********* 2026-04-07 01:01:52.374331 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.374339 | orchestrator | 2026-04-07 01:01:52.374347 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-07 01:01:52.374355 | orchestrator | Tuesday 07 April 2026 01:00:14 +0000 (0:00:00.136) 0:00:13.760 ********* 2026-04-07 01:01:52.374362 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374370 | orchestrator | 2026-04-07 01:01:52.374378 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-07 01:01:52.374386 | orchestrator | Tuesday 07 April 2026 01:00:14 +0000 (0:00:00.214) 0:00:13.975 ********* 2026-04-07 01:01:52.374394 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374409 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374417 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374425 | orchestrator | 2026-04-07 01:01:52.374433 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-07 01:01:52.374441 | orchestrator | Tuesday 07 April 2026 01:00:14 +0000 (0:00:00.268) 0:00:14.244 ********* 2026-04-07 01:01:52.374449 | 
orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374457 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374464 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374472 | orchestrator | 2026-04-07 01:01:52.374480 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-07 01:01:52.374488 | orchestrator | Tuesday 07 April 2026 01:00:15 +0000 (0:00:00.364) 0:00:14.608 ********* 2026-04-07 01:01:52.374496 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374504 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374512 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374520 | orchestrator | 2026-04-07 01:01:52.374533 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-07 01:01:52.374545 | orchestrator | Tuesday 07 April 2026 01:00:15 +0000 (0:00:00.540) 0:00:15.148 ********* 2026-04-07 01:01:52.374559 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374572 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374585 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374599 | orchestrator | 2026-04-07 01:01:52.374613 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-07 01:01:52.374627 | orchestrator | Tuesday 07 April 2026 01:00:16 +0000 (0:00:00.316) 0:00:15.464 ********* 2026-04-07 01:01:52.374641 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374651 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374659 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374675 | orchestrator | 2026-04-07 01:01:52.374684 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-07 01:01:52.374692 | orchestrator | Tuesday 07 April 2026 01:00:16 +0000 (0:00:00.331) 0:00:15.796 ********* 2026-04-07 01:01:52.374700 | 
orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374708 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374722 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374778 | orchestrator | 2026-04-07 01:01:52.374795 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-07 01:01:52.374809 | orchestrator | Tuesday 07 April 2026 01:00:16 +0000 (0:00:00.330) 0:00:16.126 ********* 2026-04-07 01:01:52.374823 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.374837 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.374850 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.374864 | orchestrator | 2026-04-07 01:01:52.374878 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-07 01:01:52.374891 | orchestrator | Tuesday 07 April 2026 01:00:17 +0000 (0:00:00.489) 0:00:16.616 ********* 2026-04-07 01:01:52.374905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--68f67d56--373d--5470--8a0c--a7bd578cf9eb-osd--block--68f67d56--373d--5470--8a0c--a7bd578cf9eb', 'dm-uuid-LVM-mZEZ9AEcVigBLCVKnQ6kQvuHeb6scNqtafvZSbe2zBaKe5Zscx1bDxau8nTCY3nG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.374920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d-osd--block--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d', 'dm-uuid-LVM-kJxz3LjCmaVw5gnVhd5O9Lq30TLxbGyYnMbiBl81TypAzKu55NRLfqXyqo1atvPN'], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.374944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.374958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.374971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.374992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375110 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--68f67d56--373d--5470--8a0c--a7bd578cf9eb-osd--block--68f67d56--373d--5470--8a0c--a7bd578cf9eb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WhS6TY-smGD-0vTn-PSrp-JmLa-lOKo-hj7dKO', 'scsi-0QEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706', 'scsi-SQEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43d30fb7--a654--5dbf--ba50--28c21932998c-osd--block--43d30fb7--a654--5dbf--ba50--28c21932998c', 'dm-uuid-LVM-RuzjjpGuKLhfgUSO0j9UbYZHMgVcRrMpS6o1eT39eBftYeXGtMpit0E42pIr0kUx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d-osd--block--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-upoiZR-Zew0-FZ2C-oske-9ezc-Kpbr-uderoV', 'scsi-0QEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4', 'scsi-SQEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--db8a0de8--f58a--5642--89e2--a8dce5d117db-osd--block--db8a0de8--f58a--5642--89e2--a8dce5d117db', 'dm-uuid-LVM-VvjF4eKbyQ2OsUFWPqkAeuu8RDIhsJqdSbu69fqEotkdp205IrUnOedu7OwbQzsf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73', 'scsi-SQEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-07 01:01:52.375501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375593 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.375606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--43d30fb7--a654--5dbf--ba50--28c21932998c-osd--block--43d30fb7--a654--5dbf--ba50--28c21932998c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zfGsI3-iIvv-uwmH-oqOE-dgq8-Rk0R-VsyNE0', 'scsi-0QEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30', 'scsi-SQEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--db8a0de8--f58a--5642--89e2--a8dce5d117db-osd--block--db8a0de8--f58a--5642--89e2--a8dce5d117db'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eh3ESV-U19B-yaNr-BV5N-BXEc-oddH-ucsgyx', 'scsi-0QEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e', 'scsi-SQEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39', 'scsi-SQEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375711 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.375727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0-osd--block--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0', 'dm-uuid-LVM-F4n5dWigBqQIu532mQIWDLNYgUVJ3BiW6X8R8cxS1h8GruTxaNBrDSP8BCYV40NR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--27d9f8cd--a6eb--5015--929a--744349431582-osd--block--27d9f8cd--a6eb--5015--929a--744349431582', 'dm-uuid-LVM-xQmpgel33ejVPKRtIAxG6GhkzWbexzdvAlfpdstTkLoDf6WgX3pw0feGhHV3cgko'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 01:01:52.375899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part1', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part14', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part15', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part16', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0-osd--block--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WpWuhZ-s2vi-68wW-8qq6-nf7r-XLSU-nWzndG', 'scsi-0QEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d', 'scsi-SQEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--27d9f8cd--a6eb--5015--929a--744349431582-osd--block--27d9f8cd--a6eb--5015--929a--744349431582'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-F7fEoT-KAuk-uLWY-FeSF-tPj5-bvFy-p5511y', 'scsi-0QEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe', 'scsi-SQEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c', 'scsi-SQEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 01:01:52.375997 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.376011 | orchestrator | 2026-04-07 01:01:52.376023 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-07 01:01:52.376036 | orchestrator | Tuesday 07 April 2026 01:00:17 +0000 (0:00:00.621) 0:00:17.237 ********* 2026-04-07 01:01:52.376050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--68f67d56--373d--5470--8a0c--a7bd578cf9eb-osd--block--68f67d56--373d--5470--8a0c--a7bd578cf9eb', 'dm-uuid-LVM-mZEZ9AEcVigBLCVKnQ6kQvuHeb6scNqtafvZSbe2zBaKe5Zscx1bDxau8nTCY3nG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d-osd--block--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d', 'dm-uuid-LVM-kJxz3LjCmaVw5gnVhd5O9Lq30TLxbGyYnMbiBl81TypAzKu55NRLfqXyqo1atvPN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--43d30fb7--a654--5dbf--ba50--28c21932998c-osd--block--43d30fb7--a654--5dbf--ba50--28c21932998c', 'dm-uuid-LVM-RuzjjpGuKLhfgUSO0j9UbYZHMgVcRrMpS6o1eT39eBftYeXGtMpit0E42pIr0kUx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376200 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376223 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--db8a0de8--f58a--5642--89e2--a8dce5d117db-osd--block--db8a0de8--f58a--5642--89e2--a8dce5d117db', 'dm-uuid-LVM-VvjF4eKbyQ2OsUFWPqkAeuu8RDIhsJqdSbu69fqEotkdp205IrUnOedu7OwbQzsf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-07 01:01:52.376251 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376265 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd517331-9c52-419f-93b8-9167504f17a1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-07 01:01:52.376322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--68f67d56--373d--5470--8a0c--a7bd578cf9eb-osd--block--68f67d56--373d--5470--8a0c--a7bd578cf9eb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WhS6TY-smGD-0vTn-PSrp-JmLa-lOKo-hj7dKO', 'scsi-0QEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706', 'scsi-SQEMU_QEMU_HARDDISK_e2189674-a553-4d5d-8fd8-5508ff437706'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376367 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376382 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d-osd--block--eae9bbfc--ddf3--58b9--bffe--50f4fd603d5d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-upoiZR-Zew0-FZ2C-oske-9ezc-Kpbr-uderoV', 'scsi-0QEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4', 'scsi-SQEMU_QEMU_HARDDISK_3172f6cd-16a6-47ae-9a74-28bff05f52e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73', 'scsi-SQEMU_QEMU_HARDDISK_55495174-9adc-4a3f-978b-4142e2213b73'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376414 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376443 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376472 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.376486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376500 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376545 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e234e93-4956-44de-aa9e-0c10a0121988-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-07 01:01:52.376570 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--43d30fb7--a654--5dbf--ba50--28c21932998c-osd--block--43d30fb7--a654--5dbf--ba50--28c21932998c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zfGsI3-iIvv-uwmH-oqOE-dgq8-Rk0R-VsyNE0', 'scsi-0QEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30', 'scsi-SQEMU_QEMU_HARDDISK_fad897de-4fc3-471c-b210-14b98141fe30'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376579 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0-osd--block--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0', 'dm-uuid-LVM-F4n5dWigBqQIu532mQIWDLNYgUVJ3BiW6X8R8cxS1h8GruTxaNBrDSP8BCYV40NR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376591 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27d9f8cd--a6eb--5015--929a--744349431582-osd--block--27d9f8cd--a6eb--5015--929a--744349431582', 'dm-uuid-LVM-xQmpgel33ejVPKRtIAxG6GhkzWbexzdvAlfpdstTkLoDf6WgX3pw0feGhHV3cgko'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--db8a0de8--f58a--5642--89e2--a8dce5d117db-osd--block--db8a0de8--f58a--5642--89e2--a8dce5d117db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eh3ESV-U19B-yaNr-BV5N-BXEc-oddH-ucsgyx', 'scsi-0QEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e', 'scsi-SQEMU_QEMU_HARDDISK_fa777649-5680-4322-b615-3bf8b4a5ab2e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376619 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376628 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39', 'scsi-SQEMU_QEMU_HARDDISK_c3ad8b00-5bc8-428f-af67-6bd1265a9b39'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376636 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376645 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376660 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376668 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.376682 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376691 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376699 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376715 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376733 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part1', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part14', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part15', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part16', 'scsi-SQEMU_QEMU_HARDDISK_a3bddeda-068f-4606-ac9b-bb011ef193ff-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376748 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0-osd--block--959bec69--a72e--5ac6--9cdc--b8ec54ca62e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WpWuhZ-s2vi-68wW-8qq6-nf7r-XLSU-nWzndG', 'scsi-0QEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d', 'scsi-SQEMU_QEMU_HARDDISK_01ab1f04-e59c-4d36-99ed-1bd22a22bd9d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376756 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--27d9f8cd--a6eb--5015--929a--744349431582-osd--block--27d9f8cd--a6eb--5015--929a--744349431582'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-F7fEoT-KAuk-uLWY-FeSF-tPj5-bvFy-p5511y', 'scsi-0QEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe', 'scsi-SQEMU_QEMU_HARDDISK_51e4949c-955e-4de9-a772-15b9aebb09fe'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376775 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c', 'scsi-SQEMU_QEMU_HARDDISK_89661b50-0f8c-4be3-a02e-39629210b15c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376788 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-00-03-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 01:01:52.376797 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.376805 | orchestrator | 2026-04-07 01:01:52.376813 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-07 01:01:52.376822 | orchestrator | Tuesday 07 April 2026 01:00:18 +0000 (0:00:00.620) 0:00:17.858 ********* 2026-04-07 01:01:52.376830 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.376838 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.376846 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.376854 | orchestrator | 2026-04-07 01:01:52.376862 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-07 01:01:52.376870 | orchestrator | Tuesday 07 April 2026 01:00:19 +0000 (0:00:00.624) 0:00:18.483 ********* 2026-04-07 01:01:52.376878 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.376886 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.376900 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.376916 | orchestrator | 2026-04-07 01:01:52.376935 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 01:01:52.376948 | orchestrator | Tuesday 07 April 2026 01:00:19 +0000 (0:00:00.486) 0:00:18.970 ********* 2026-04-07 01:01:52.376961 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.376974 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.376987 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.377000 | orchestrator | 2026-04-07 01:01:52.377011 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 01:01:52.377024 | orchestrator | Tuesday 07 April 2026 01:00:20 +0000 (0:00:00.654) 0:00:19.624 
********* 2026-04-07 01:01:52.377038 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.377051 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.377064 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.377078 | orchestrator | 2026-04-07 01:01:52.377091 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 01:01:52.377105 | orchestrator | Tuesday 07 April 2026 01:00:20 +0000 (0:00:00.292) 0:00:19.916 ********* 2026-04-07 01:01:52.377119 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.377132 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.377145 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.377159 | orchestrator | 2026-04-07 01:01:52.377172 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 01:01:52.377194 | orchestrator | Tuesday 07 April 2026 01:00:21 +0000 (0:00:00.382) 0:00:20.299 ********* 2026-04-07 01:01:52.377208 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.377222 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.377236 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.377250 | orchestrator | 2026-04-07 01:01:52.377261 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-07 01:01:52.377269 | orchestrator | Tuesday 07 April 2026 01:00:21 +0000 (0:00:00.449) 0:00:20.749 ********* 2026-04-07 01:01:52.377417 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-07 01:01:52.377452 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-07 01:01:52.377460 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-07 01:01:52.377468 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-07 01:01:52.377476 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-07 01:01:52.377484 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-07 01:01:52.377492 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-07 01:01:52.377499 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-07 01:01:52.377507 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-07 01:01:52.377515 | orchestrator | 2026-04-07 01:01:52.377523 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-07 01:01:52.377531 | orchestrator | Tuesday 07 April 2026 01:00:22 +0000 (0:00:00.849) 0:00:21.598 ********* 2026-04-07 01:01:52.377539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-07 01:01:52.377547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-07 01:01:52.377555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-07 01:01:52.377562 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.377570 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-07 01:01:52.377578 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-07 01:01:52.377593 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-07 01:01:52.377601 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.377609 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-07 01:01:52.377617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-07 01:01:52.377624 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-07 01:01:52.377632 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.377640 | orchestrator | 2026-04-07 01:01:52.377648 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-07 01:01:52.377656 | orchestrator | Tuesday 07 April 2026 01:00:22 +0000 (0:00:00.322) 0:00:21.920 ********* 2026-04-07 
01:01:52.377665 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:01:52.377673 | orchestrator | 2026-04-07 01:01:52.377681 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-07 01:01:52.377689 | orchestrator | Tuesday 07 April 2026 01:00:23 +0000 (0:00:00.588) 0:00:22.509 ********* 2026-04-07 01:01:52.377707 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.377714 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.377721 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.377727 | orchestrator | 2026-04-07 01:01:52.377734 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-07 01:01:52.377740 | orchestrator | Tuesday 07 April 2026 01:00:23 +0000 (0:00:00.275) 0:00:22.785 ********* 2026-04-07 01:01:52.377747 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.377754 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.377760 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.377774 | orchestrator | 2026-04-07 01:01:52.377780 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-07 01:01:52.377787 | orchestrator | Tuesday 07 April 2026 01:00:23 +0000 (0:00:00.284) 0:00:23.070 ********* 2026-04-07 01:01:52.377794 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.377800 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.377807 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:01:52.377813 | orchestrator | 2026-04-07 01:01:52.377820 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-07 01:01:52.377827 | orchestrator | Tuesday 07 April 2026 01:00:24 +0000 (0:00:00.269) 0:00:23.339 ********* 2026-04-07 
01:01:52.377833 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.377840 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.377847 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.377854 | orchestrator | 2026-04-07 01:01:52.377875 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-07 01:01:52.377882 | orchestrator | Tuesday 07 April 2026 01:00:24 +0000 (0:00:00.504) 0:00:23.844 ********* 2026-04-07 01:01:52.377889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 01:01:52.377896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 01:01:52.377902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 01:01:52.377909 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.377915 | orchestrator | 2026-04-07 01:01:52.377922 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-07 01:01:52.377929 | orchestrator | Tuesday 07 April 2026 01:00:24 +0000 (0:00:00.334) 0:00:24.179 ********* 2026-04-07 01:01:52.377935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 01:01:52.377942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 01:01:52.377949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 01:01:52.377955 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.377962 | orchestrator | 2026-04-07 01:01:52.377969 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-07 01:01:52.377975 | orchestrator | Tuesday 07 April 2026 01:00:25 +0000 (0:00:00.346) 0:00:24.525 ********* 2026-04-07 01:01:52.377982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 01:01:52.377989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 01:01:52.377996 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 01:01:52.378002 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.378009 | orchestrator | 2026-04-07 01:01:52.378047 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-07 01:01:52.378054 | orchestrator | Tuesday 07 April 2026 01:00:25 +0000 (0:00:00.341) 0:00:24.866 ********* 2026-04-07 01:01:52.378061 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:01:52.378067 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:01:52.378074 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:01:52.378080 | orchestrator | 2026-04-07 01:01:52.378087 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-07 01:01:52.378094 | orchestrator | Tuesday 07 April 2026 01:00:25 +0000 (0:00:00.308) 0:00:25.175 ********* 2026-04-07 01:01:52.378101 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-07 01:01:52.378107 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-07 01:01:52.378114 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-07 01:01:52.378121 | orchestrator | 2026-04-07 01:01:52.378128 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-07 01:01:52.378134 | orchestrator | Tuesday 07 April 2026 01:00:26 +0000 (0:00:00.455) 0:00:25.630 ********* 2026-04-07 01:01:52.378141 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 01:01:52.378148 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 01:01:52.378155 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 01:01:52.378249 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-07 01:01:52.378265 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-07 01:01:52.378297 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 01:01:52.378310 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 01:01:52.378321 | orchestrator | 2026-04-07 01:01:52.378331 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-07 01:01:52.378342 | orchestrator | Tuesday 07 April 2026 01:00:27 +0000 (0:00:00.878) 0:00:26.509 ********* 2026-04-07 01:01:52.378353 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 01:01:52.378372 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 01:01:52.378385 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 01:01:52.378397 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-07 01:01:52.378408 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 01:01:52.378420 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 01:01:52.378440 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 01:01:52.378452 | orchestrator | 2026-04-07 01:01:52.378461 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-07 01:01:52.378468 | orchestrator | Tuesday 07 April 2026 01:00:29 +0000 (0:00:01.960) 0:00:28.469 ********* 2026-04-07 01:01:52.378475 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:01:52.378481 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:01:52.378488 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-07 01:01:52.378495 | orchestrator | 2026-04-07 01:01:52.378502 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-07 01:01:52.378508 | orchestrator | Tuesday 07 April 2026 01:00:29 +0000 (0:00:00.363) 0:00:28.833 ********* 2026-04-07 01:01:52.378516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 01:01:52.378524 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 01:01:52.378531 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 01:01:52.378538 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 01:01:52.378545 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 01:01:52.378552 | orchestrator | 2026-04-07 01:01:52.378559 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-07 01:01:52.378573 | orchestrator | Tuesday 07 April 2026 01:01:06 +0000 (0:00:36.943) 0:01:05.777 ********* 2026-04-07 01:01:52.378579 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378586 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378592 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378599 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378606 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378612 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378619 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-07 01:01:52.378625 | orchestrator | 2026-04-07 01:01:52.378632 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-07 01:01:52.378639 | orchestrator | Tuesday 07 April 2026 01:01:24 +0000 (0:00:17.807) 0:01:23.584 ********* 2026-04-07 01:01:52.378645 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378652 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378659 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378665 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378676 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378682 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378689 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-07 01:01:52.378696 | orchestrator | 2026-04-07 01:01:52.378702 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-07 01:01:52.378709 | orchestrator | Tuesday 07 April 2026 01:01:33 +0000 (0:00:09.555) 0:01:33.139 ********* 2026-04-07 01:01:52.378716 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378722 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 01:01:52.378729 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 01:01:52.378735 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378742 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 01:01:52.378753 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 01:01:52.378760 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378766 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 01:01:52.378773 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 01:01:52.378780 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378786 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 01:01:52.378793 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 01:01:52.378799 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378806 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-07 01:01:52.378813 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 01:01:52.378819 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 01:01:52.378826 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 01:01:52.378832 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 01:01:52.378843 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-07 01:01:52.378850 | orchestrator | 2026-04-07 01:01:52.378856 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:01:52.378863 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-07 01:01:52.378871 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-07 01:01:52.378878 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-07 01:01:52.378885 | orchestrator | 2026-04-07 01:01:52.378891 | orchestrator | 2026-04-07 01:01:52.378898 | orchestrator | 2026-04-07 01:01:52.378904 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:01:52.378911 | orchestrator | Tuesday 07 April 2026 01:01:51 +0000 (0:00:17.937) 0:01:51.077 ********* 2026-04-07 01:01:52.378918 | orchestrator | =============================================================================== 2026-04-07 01:01:52.378924 | orchestrator | create openstack pool(s) ----------------------------------------------- 36.94s 2026-04-07 01:01:52.378931 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.94s 2026-04-07 01:01:52.378937 | orchestrator | generate keys ---------------------------------------------------------- 17.81s 
2026-04-07 01:01:52.378944 | orchestrator | get keys from monitors -------------------------------------------------- 9.56s 2026-04-07 01:01:52.378951 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.04s 2026-04-07 01:01:52.378957 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.96s 2026-04-07 01:01:52.378964 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.21s 2026-04-07 01:01:52.378970 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.94s 2026-04-07 01:01:52.378977 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.88s 2026-04-07 01:01:52.378983 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2026-04-07 01:01:52.378990 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.84s 2026-04-07 01:01:52.378996 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.82s 2026-04-07 01:01:52.379003 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s 2026-04-07 01:01:52.379010 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s 2026-04-07 01:01:52.379016 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.62s 2026-04-07 01:01:52.379023 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.62s 2026-04-07 01:01:52.379034 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s 2026-04-07 01:01:52.379041 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.59s 2026-04-07 01:01:52.379047 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.56s 2026-04-07 
01:01:52.379054 | orchestrator | ceph-facts : Set_fact discovered_interpreter_python if not previously set --- 0.55s 2026-04-07 01:01:52.379060 | orchestrator | 2026-04-07 01:01:52 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:52.379067 | orchestrator | 2026-04-07 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:01:55.417893 | orchestrator | 2026-04-07 01:01:55 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:01:55.419351 | orchestrator | 2026-04-07 01:01:55 | INFO  | Task 3f237cbf-76ff-4f5f-9e9f-80f1616e8d7c is in state STARTED 2026-04-07 01:01:55.421594 | orchestrator | 2026-04-07 01:01:55 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:01:55.421782 | orchestrator | 2026-04-07 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:02:32.042581 | orchestrator | 2026-04-07 01:02:32 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in state STARTED 2026-04-07 01:02:32.042923 | orchestrator | 2026-04-07 01:02:32 | INFO  | Task 3f237cbf-76ff-4f5f-9e9f-80f1616e8d7c is in state SUCCESS 2026-04-07 01:02:32.045699 | orchestrator | 2026-04-07 01:02:32 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:02:32.047196 | orchestrator | 2026-04-07 01:02:32 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:02:32.047254 | orchestrator | 2026-04-07 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:02:50.321371 | orchestrator | 2026-04-07 01:02:50 | INFO  | Task dff71051-19a1-492b-8df2-d1934773f6a1 is in
state SUCCESS 2026-04-07 01:02:50.323757 | orchestrator | 2026-04-07 01:02:50.323841 | orchestrator | 2026-04-07 01:02:50.323853 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-07 01:02:50.323861 | orchestrator | 2026-04-07 01:02:50.323867 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-07 01:02:50.323875 | orchestrator | Tuesday 07 April 2026 01:01:55 +0000 (0:00:00.248) 0:00:00.248 ********* 2026-04-07 01:02:50.323881 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-07 01:02:50.323891 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.323895 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.323903 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 01:02:50.323908 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.323912 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-07 01:02:50.323917 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-07 01:02:50.323921 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-07 01:02:50.323925 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-07 01:02:50.323929 | orchestrator | 2026-04-07 01:02:50.323932 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-07 01:02:50.323937 | orchestrator | Tuesday 07 April 2026 01:01:59 +0000 
(0:00:04.170) 0:00:04.419 ********* 2026-04-07 01:02:50.323941 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-07 01:02:50.323944 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.323969 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.323975 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 01:02:50.323981 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.323988 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-07 01:02:50.323997 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-07 01:02:50.324004 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-07 01:02:50.324009 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-07 01:02:50.324015 | orchestrator | 2026-04-07 01:02:50.324021 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-07 01:02:50.324027 | orchestrator | Tuesday 07 April 2026 01:02:03 +0000 (0:00:03.861) 0:00:08.280 ********* 2026-04-07 01:02:50.324034 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-07 01:02:50.324039 | orchestrator | 2026-04-07 01:02:50.324045 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-07 01:02:50.324050 | orchestrator | Tuesday 07 April 2026 01:02:04 +0000 (0:00:01.051) 0:00:09.331 ********* 2026-04-07 01:02:50.324057 | orchestrator | changed: [testbed-manager -> 
localhost] => (item=ceph.client.admin.keyring) 2026-04-07 01:02:50.324064 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.324071 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.324078 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 01:02:50.324085 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.324090 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-07 01:02:50.324097 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-07 01:02:50.324117 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-07 01:02:50.324124 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-07 01:02:50.324130 | orchestrator | 2026-04-07 01:02:50.324136 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-07 01:02:50.324143 | orchestrator | Tuesday 07 April 2026 01:02:18 +0000 (0:00:14.100) 0:00:23.432 ********* 2026-04-07 01:02:50.324150 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-07 01:02:50.324156 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-07 01:02:50.324164 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-07 01:02:50.324170 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-07 01:02:50.324192 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-07 01:02:50.324199 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-07 01:02:50.324205 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-07 01:02:50.324211 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-07 01:02:50.324216 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-07 01:02:50.324231 | orchestrator | 2026-04-07 01:02:50.324274 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-07 01:02:50.324283 | orchestrator | Tuesday 07 April 2026 01:02:22 +0000 (0:00:03.264) 0:00:26.696 ********* 2026-04-07 01:02:50.324292 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-07 01:02:50.324299 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.324324 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.324330 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 01:02:50.324336 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-07 01:02:50.324342 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-07 01:02:50.324360 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-07 01:02:50.324367 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-07 01:02:50.324374 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-07 01:02:50.324380 | orchestrator | 2026-04-07 01:02:50.324386 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-07 01:02:50.324392 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 01:02:50.324400 | orchestrator | 2026-04-07 01:02:50.324406 | orchestrator | 2026-04-07 01:02:50.324412 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:02:50.324418 | orchestrator | Tuesday 07 April 2026 01:02:29 +0000 (0:00:07.071) 0:00:33.768 ********* 2026-04-07 01:02:50.324425 | orchestrator | =============================================================================== 2026-04-07 01:02:50.324431 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.10s 2026-04-07 01:02:50.324674 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.07s 2026-04-07 01:02:50.324690 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.17s 2026-04-07 01:02:50.324696 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.86s 2026-04-07 01:02:50.324702 | orchestrator | Check if target directories exist --------------------------------------- 3.26s 2026-04-07 01:02:50.324708 | orchestrator | Create share directory -------------------------------------------------- 1.05s 2026-04-07 01:02:50.324714 | orchestrator | 2026-04-07 01:02:50.324719 | orchestrator | 2026-04-07 01:02:50.324724 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:02:50.324730 | orchestrator | 2026-04-07 01:02:50.324736 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:02:50.324741 | orchestrator | Tuesday 07 April 2026 01:01:15 +0000 (0:00:00.318) 0:00:00.318 ********* 2026-04-07 01:02:50.324747 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.324754 | 
orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.324760 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.324766 | orchestrator | 2026-04-07 01:02:50.324772 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:02:50.324778 | orchestrator | Tuesday 07 April 2026 01:01:15 +0000 (0:00:00.274) 0:00:00.592 ********* 2026-04-07 01:02:50.324783 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-07 01:02:50.324790 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-07 01:02:50.324796 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-07 01:02:50.324801 | orchestrator | 2026-04-07 01:02:50.324805 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-07 01:02:50.324809 | orchestrator | 2026-04-07 01:02:50.324813 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-07 01:02:50.324817 | orchestrator | Tuesday 07 April 2026 01:01:16 +0000 (0:00:00.291) 0:00:00.884 ********* 2026-04-07 01:02:50.324829 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:02:50.324834 | orchestrator | 2026-04-07 01:02:50.324856 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-07 01:02:50.324860 | orchestrator | Tuesday 07 April 2026 01:01:16 +0000 (0:00:00.587) 0:00:01.471 ********* 2026-04-07 01:02:50.324878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 01:02:50.324889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 01:02:50.324904 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 01:02:50.324908 | orchestrator | 2026-04-07 01:02:50.324914 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-07 01:02:50.324921 | orchestrator | Tuesday 07 April 2026 01:01:18 +0000 (0:00:01.439) 0:00:02.910 ********* 2026-04-07 01:02:50.324927 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.324933 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.324938 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.324946 | orchestrator | 2026-04-07 01:02:50.324950 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-07 01:02:50.324953 | orchestrator | Tuesday 07 April 2026 01:01:18 +0000 (0:00:00.269) 0:00:03.179 ********* 2026-04-07 01:02:50.324957 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-07 01:02:50.324961 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-07 01:02:50.324965 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-07 01:02:50.324969 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-07 01:02:50.324973 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-07 01:02:50.324991 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-07 01:02:50.324997 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-07 01:02:50.325003 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-07 01:02:50.325008 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-07 
01:02:50.325014 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-07 01:02:50.325020 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-07 01:02:50.325026 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-07 01:02:50.325032 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-07 01:02:50.325038 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-07 01:02:50.325044 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-07 01:02:50.325051 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-07 01:02:50.325055 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-07 01:02:50.325059 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-07 01:02:50.325063 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-07 01:02:50.325069 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-07 01:02:50.325075 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-07 01:02:50.325080 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-07 01:02:50.325093 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-07 01:02:50.325101 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-07 01:02:50.325108 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-07 
01:02:50.325116 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-07 01:02:50.325122 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-07 01:02:50.325127 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-07 01:02:50.325133 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-07 01:02:50.325138 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-07 01:02:50.325143 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-07 01:02:50.325148 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-07 01:02:50.325154 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-07 01:02:50.325162 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-07 01:02:50.325174 | orchestrator | 2026-04-07 01:02:50.325180 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-07 01:02:50.325186 | orchestrator | Tuesday 07 April 2026 
01:01:19 +0000 (0:00:00.697) 0:00:03.877 ********* 2026-04-07 01:02:50.325191 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.325200 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.325208 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.325213 | orchestrator | 2026-04-07 01:02:50.325344 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.325351 | orchestrator | Tuesday 07 April 2026 01:01:19 +0000 (0:00:00.457) 0:00:04.334 ********* 2026-04-07 01:02:50.325357 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325363 | orchestrator | 2026-04-07 01:02:50.325369 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.325376 | orchestrator | Tuesday 07 April 2026 01:01:19 +0000 (0:00:00.136) 0:00:04.471 ********* 2026-04-07 01:02:50.325382 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325387 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.325398 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.325403 | orchestrator | 2026-04-07 01:02:50.325485 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-07 01:02:50.325508 | orchestrator | Tuesday 07 April 2026 01:01:19 +0000 (0:00:00.314) 0:00:04.785 ********* 2026-04-07 01:02:50.325514 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.325520 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.325526 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.325532 | orchestrator | 2026-04-07 01:02:50.325538 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.325544 | orchestrator | Tuesday 07 April 2026 01:01:20 +0000 (0:00:00.284) 0:00:05.069 ********* 2026-04-07 01:02:50.325550 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325555 | orchestrator | 2026-04-07 
01:02:50.325561 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.325579 | orchestrator | Tuesday 07 April 2026 01:01:20 +0000 (0:00:00.133) 0:00:05.203 ********* 2026-04-07 01:02:50.325586 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325592 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.325598 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.325604 | orchestrator | 2026-04-07 01:02:50.325613 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-07 01:02:50.325619 | orchestrator | Tuesday 07 April 2026 01:01:20 +0000 (0:00:00.434) 0:00:05.637 ********* 2026-04-07 01:02:50.325625 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.325631 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.325637 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.325643 | orchestrator | 2026-04-07 01:02:50.325648 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.325654 | orchestrator | Tuesday 07 April 2026 01:01:21 +0000 (0:00:00.292) 0:00:05.930 ********* 2026-04-07 01:02:50.325660 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325666 | orchestrator | 2026-04-07 01:02:50.325671 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.325677 | orchestrator | Tuesday 07 April 2026 01:01:21 +0000 (0:00:00.108) 0:00:06.039 ********* 2026-04-07 01:02:50.325682 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325687 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.325692 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.325699 | orchestrator | 2026-04-07 01:02:50.325704 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-07 01:02:50.325719 | orchestrator | 
Tuesday 07 April 2026 01:01:21 +0000 (0:00:00.269) 0:00:06.308 ********* 2026-04-07 01:02:50.325725 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.325730 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.325736 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.325742 | orchestrator | 2026-04-07 01:02:50.325754 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.325760 | orchestrator | Tuesday 07 April 2026 01:01:21 +0000 (0:00:00.295) 0:00:06.604 ********* 2026-04-07 01:02:50.325765 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325770 | orchestrator | 2026-04-07 01:02:50.325775 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.325781 | orchestrator | Tuesday 07 April 2026 01:01:21 +0000 (0:00:00.115) 0:00:06.720 ********* 2026-04-07 01:02:50.325787 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325793 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.325798 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.325804 | orchestrator | 2026-04-07 01:02:50.325809 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-07 01:02:50.325815 | orchestrator | Tuesday 07 April 2026 01:01:22 +0000 (0:00:00.444) 0:00:07.164 ********* 2026-04-07 01:02:50.325821 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.325826 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.325832 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.325837 | orchestrator | 2026-04-07 01:02:50.325842 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.325849 | orchestrator | Tuesday 07 April 2026 01:01:22 +0000 (0:00:00.300) 0:00:07.465 ********* 2026-04-07 01:02:50.325854 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325859 | 
orchestrator | 2026-04-07 01:02:50.325865 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.325870 | orchestrator | Tuesday 07 April 2026 01:01:22 +0000 (0:00:00.120) 0:00:07.585 ********* 2026-04-07 01:02:50.325876 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325882 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.325888 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.325894 | orchestrator | 2026-04-07 01:02:50.325899 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-07 01:02:50.325904 | orchestrator | Tuesday 07 April 2026 01:01:23 +0000 (0:00:00.270) 0:00:07.856 ********* 2026-04-07 01:02:50.325910 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.325916 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.325921 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.325926 | orchestrator | 2026-04-07 01:02:50.325931 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.325937 | orchestrator | Tuesday 07 April 2026 01:01:23 +0000 (0:00:00.459) 0:00:08.316 ********* 2026-04-07 01:02:50.325952 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325959 | orchestrator | 2026-04-07 01:02:50.325964 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.325970 | orchestrator | Tuesday 07 April 2026 01:01:23 +0000 (0:00:00.139) 0:00:08.455 ********* 2026-04-07 01:02:50.325975 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.325980 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.325986 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.325992 | orchestrator | 2026-04-07 01:02:50.325998 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-07 
01:02:50.326003 | orchestrator | Tuesday 07 April 2026 01:01:23 +0000 (0:00:00.309) 0:00:08.764 ********* 2026-04-07 01:02:50.326009 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.326083 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.326092 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.326098 | orchestrator | 2026-04-07 01:02:50.326103 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.326110 | orchestrator | Tuesday 07 April 2026 01:01:24 +0000 (0:00:00.331) 0:00:09.096 ********* 2026-04-07 01:02:50.326117 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326123 | orchestrator | 2026-04-07 01:02:50.326130 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.326137 | orchestrator | Tuesday 07 April 2026 01:01:24 +0000 (0:00:00.171) 0:00:09.268 ********* 2026-04-07 01:02:50.326148 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326153 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.326157 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.326162 | orchestrator | 2026-04-07 01:02:50.326167 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-07 01:02:50.326172 | orchestrator | Tuesday 07 April 2026 01:01:24 +0000 (0:00:00.266) 0:00:09.534 ********* 2026-04-07 01:02:50.326178 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.326184 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.326190 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.326195 | orchestrator | 2026-04-07 01:02:50.326200 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.326206 | orchestrator | Tuesday 07 April 2026 01:01:25 +0000 (0:00:00.523) 0:00:10.058 ********* 2026-04-07 01:02:50.326211 | orchestrator | skipping: [testbed-node-0] 
2026-04-07 01:02:50.326216 | orchestrator | 2026-04-07 01:02:50.326226 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.326232 | orchestrator | Tuesday 07 April 2026 01:01:25 +0000 (0:00:00.117) 0:00:10.175 ********* 2026-04-07 01:02:50.326237 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326243 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.326248 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.326253 | orchestrator | 2026-04-07 01:02:50.326259 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-07 01:02:50.326264 | orchestrator | Tuesday 07 April 2026 01:01:25 +0000 (0:00:00.263) 0:00:10.439 ********* 2026-04-07 01:02:50.326270 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.326275 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.326281 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.326288 | orchestrator | 2026-04-07 01:02:50.326295 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.326301 | orchestrator | Tuesday 07 April 2026 01:01:25 +0000 (0:00:00.324) 0:00:10.764 ********* 2026-04-07 01:02:50.326348 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326354 | orchestrator | 2026-04-07 01:02:50.326367 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.326373 | orchestrator | Tuesday 07 April 2026 01:01:26 +0000 (0:00:00.117) 0:00:10.881 ********* 2026-04-07 01:02:50.326380 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326385 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.326391 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.326397 | orchestrator | 2026-04-07 01:02:50.326402 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2026-04-07 01:02:50.326407 | orchestrator | Tuesday 07 April 2026 01:01:26 +0000 (0:00:00.302) 0:00:11.184 ********* 2026-04-07 01:02:50.326413 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:02:50.326419 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:02:50.326424 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:02:50.326429 | orchestrator | 2026-04-07 01:02:50.326435 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-07 01:02:50.326441 | orchestrator | Tuesday 07 April 2026 01:01:26 +0000 (0:00:00.486) 0:00:11.671 ********* 2026-04-07 01:02:50.326447 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326451 | orchestrator | 2026-04-07 01:02:50.326455 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-07 01:02:50.326459 | orchestrator | Tuesday 07 April 2026 01:01:26 +0000 (0:00:00.122) 0:00:11.794 ********* 2026-04-07 01:02:50.326463 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326468 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.326474 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.326482 | orchestrator | 2026-04-07 01:02:50.326489 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-07 01:02:50.326496 | orchestrator | Tuesday 07 April 2026 01:01:27 +0000 (0:00:00.286) 0:00:12.080 ********* 2026-04-07 01:02:50.326509 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:02:50.326515 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:02:50.326520 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:02:50.326525 | orchestrator | 2026-04-07 01:02:50.326531 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-07 01:02:50.326536 | orchestrator | Tuesday 07 April 2026 01:01:28 +0000 (0:00:01.668) 0:00:13.749 ********* 
2026-04-07 01:02:50.326543 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-07 01:02:50.326549 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-07 01:02:50.326555 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-07 01:02:50.326561 | orchestrator | 2026-04-07 01:02:50.326567 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-07 01:02:50.326573 | orchestrator | Tuesday 07 April 2026 01:01:31 +0000 (0:00:02.660) 0:00:16.409 ********* 2026-04-07 01:02:50.326579 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-07 01:02:50.326597 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-07 01:02:50.326604 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-07 01:02:50.326609 | orchestrator | 2026-04-07 01:02:50.326616 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-07 01:02:50.326629 | orchestrator | Tuesday 07 April 2026 01:01:33 +0000 (0:00:01.954) 0:00:18.364 ********* 2026-04-07 01:02:50.326635 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-07 01:02:50.326642 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-07 01:02:50.326648 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-07 01:02:50.326654 | orchestrator | 2026-04-07 01:02:50.326660 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-07 01:02:50.326666 | 
orchestrator | Tuesday 07 April 2026 01:01:35 +0000 (0:00:01.622) 0:00:19.987 ********* 2026-04-07 01:02:50.326673 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326680 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.326687 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.326694 | orchestrator | 2026-04-07 01:02:50.326701 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-07 01:02:50.326707 | orchestrator | Tuesday 07 April 2026 01:01:35 +0000 (0:00:00.290) 0:00:20.278 ********* 2026-04-07 01:02:50.326715 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326722 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.326729 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.326736 | orchestrator | 2026-04-07 01:02:50.326743 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-07 01:02:50.326755 | orchestrator | Tuesday 07 April 2026 01:01:35 +0000 (0:00:00.255) 0:00:20.534 ********* 2026-04-07 01:02:50.326762 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:02:50.326768 | orchestrator | 2026-04-07 01:02:50.326774 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-07 01:02:50.326780 | orchestrator | Tuesday 07 April 2026 01:01:36 +0000 (0:00:00.751) 0:00:21.285 ********* 2026-04-07 01:02:50.326802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 
'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 01:02:50.326823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 01:02:50.326839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 01:02:50.326851 | orchestrator | 2026-04-07 01:02:50.326858 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-07 01:02:50.326864 | orchestrator | Tuesday 07 April 2026 01:01:38 +0000 (0:00:01.550) 0:00:22.835 ********* 2026-04-07 01:02:50.326880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 01:02:50.326893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 01:02:50.326901 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.326907 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.326927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 01:02:50.326940 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.326948 | orchestrator | 2026-04-07 01:02:50.326967 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-07 01:02:50.326974 | orchestrator | Tuesday 07 April 2026 01:01:38 +0000 (0:00:00.915) 0:00:23.751 ********* 2026-04-07 01:02:50.326982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 
'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 01:02:50.326989 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.327010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 01:02:50.327021 | orchestrator | 
skipping: [testbed-node-1] 2026-04-07 01:02:50.327026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 01:02:50.327030 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.327034 | orchestrator | 2026-04-07 01:02:50.327038 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-07 01:02:50.327042 | orchestrator | Tuesday 07 April 2026 01:01:39 +0000 (0:00:01.009) 0:00:24.760 ********* 2026-04-07 01:02:50.327056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 01:02:50.327075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 01:02:50.327094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 01:02:50.327101 | orchestrator | 2026-04-07 01:02:50.327108 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-07 01:02:50.327114 | orchestrator | Tuesday 07 April 2026 01:01:41 +0000 (0:00:01.380) 0:00:26.141 ********* 2026-04-07 01:02:50.327118 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:02:50.327122 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:02:50.327129 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:02:50.327135 | orchestrator | 2026-04-07 01:02:50.327142 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-07 01:02:50.327147 | orchestrator | Tuesday 07 April 2026 01:01:41 +0000 (0:00:00.262) 0:00:26.403 ********* 2026-04-07 01:02:50.327154 | orchestrator | 
included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:02:50.327160 | orchestrator | 2026-04-07 01:02:50.327166 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-07 01:02:50.327172 | orchestrator | Tuesday 07 April 2026 01:01:42 +0000 (0:00:00.708) 0:00:27.112 ********* 2026-04-07 01:02:50.327178 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:02:50.327184 | orchestrator | 2026-04-07 01:02:50.327190 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-07 01:02:50.327197 | orchestrator | Tuesday 07 April 2026 01:01:44 +0000 (0:00:02.121) 0:00:29.233 ********* 2026-04-07 01:02:50.327204 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:02:50.327210 | orchestrator | 2026-04-07 01:02:50.327216 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-07 01:02:50.327222 | orchestrator | Tuesday 07 April 2026 01:01:46 +0000 (0:00:02.170) 0:00:31.403 ********* 2026-04-07 01:02:50.327234 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:02:50.327241 | orchestrator | 2026-04-07 01:02:50.327246 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-07 01:02:50.327252 | orchestrator | Tuesday 07 April 2026 01:02:01 +0000 (0:00:14.635) 0:00:46.039 ********* 2026-04-07 01:02:50.327258 | orchestrator | 2026-04-07 01:02:50.327265 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-07 01:02:50.327271 | orchestrator | Tuesday 07 April 2026 01:02:01 +0000 (0:00:00.063) 0:00:46.102 ********* 2026-04-07 01:02:50.327278 | orchestrator | 2026-04-07 01:02:50.327294 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-07 01:02:50.327298 | orchestrator | Tuesday 07 April 2026 01:02:01 +0000 
(0:00:00.064) 0:00:46.167 ********* 2026-04-07 01:02:50.327302 | orchestrator | 2026-04-07 01:02:50.327387 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-07 01:02:50.327403 | orchestrator | Tuesday 07 April 2026 01:02:01 +0000 (0:00:00.081) 0:00:46.249 ********* 2026-04-07 01:02:50.327409 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:02:50.327415 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:02:50.327421 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:02:50.327427 | orchestrator | 2026-04-07 01:02:50.327433 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:02:50.327439 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-07 01:02:50.327447 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-07 01:02:50.327452 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-07 01:02:50.327458 | orchestrator | 2026-04-07 01:02:50.327463 | orchestrator | 2026-04-07 01:02:50.327476 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:02:50.327482 | orchestrator | Tuesday 07 April 2026 01:02:49 +0000 (0:00:48.579) 0:01:34.829 ********* 2026-04-07 01:02:50.327495 | orchestrator | =============================================================================== 2026-04-07 01:02:50.327501 | orchestrator | horizon : Restart horizon container ------------------------------------ 48.58s 2026-04-07 01:02:50.327507 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.64s 2026-04-07 01:02:50.327512 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.66s 2026-04-07 01:02:50.327518 | orchestrator | horizon : Creating 
Horizon database user and setting permissions -------- 2.17s 2026-04-07 01:02:50.327524 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.12s 2026-04-07 01:02:50.327529 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.95s 2026-04-07 01:02:50.327535 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.67s 2026-04-07 01:02:50.327543 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.62s 2026-04-07 01:02:50.327547 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.55s 2026-04-07 01:02:50.327551 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.44s 2026-04-07 01:02:50.327555 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.38s 2026-04-07 01:02:50.327559 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.01s 2026-04-07 01:02:50.327562 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.92s 2026-04-07 01:02:50.327566 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2026-04-07 01:02:50.327579 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2026-04-07 01:02:50.327583 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2026-04-07 01:02:50.327598 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s 2026-04-07 01:02:50.327602 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-04-07 01:02:50.327606 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2026-04-07 01:02:50.327610 | orchestrator | horizon : Update policy file 
name --------------------------------------- 0.46s 2026-04-07 01:02:50.327614 | orchestrator | 2026-04-07 01:02:50 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:02:50.328481 | orchestrator | 2026-04-07 01:02:50 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:02:50.328534 | orchestrator | 2026-04-07 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:02:53.362843 | orchestrator | 2026-04-07 01:02:53 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:02:53.363574 | orchestrator | 2026-04-07 01:02:53 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:02:53.363606 | orchestrator | 2026-04-07 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:02:56.403100 | orchestrator | 2026-04-07 01:02:56 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:02:56.405134 | orchestrator | 2026-04-07 01:02:56 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:02:56.405185 | orchestrator | 2026-04-07 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:02:59.446630 | orchestrator | 2026-04-07 01:02:59 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:02:59.447653 | orchestrator | 2026-04-07 01:02:59 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:02:59.447687 | orchestrator | 2026-04-07 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:02.491082 | orchestrator | 2026-04-07 01:03:02 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:02.493459 | orchestrator | 2026-04-07 01:03:02 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:03:02.493508 | orchestrator | 2026-04-07 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:05.542174 | 
orchestrator | 2026-04-07 01:03:05 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:05.543833 | orchestrator | 2026-04-07 01:03:05 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:03:05.543903 | orchestrator | 2026-04-07 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:08.584464 | orchestrator | 2026-04-07 01:03:08 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:08.587366 | orchestrator | 2026-04-07 01:03:08 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:03:08.587458 | orchestrator | 2026-04-07 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:11.629428 | orchestrator | 2026-04-07 01:03:11 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:11.630733 | orchestrator | 2026-04-07 01:03:11 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:03:11.630822 | orchestrator | 2026-04-07 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:14.673512 | orchestrator | 2026-04-07 01:03:14 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:14.676307 | orchestrator | 2026-04-07 01:03:14 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:03:14.676407 | orchestrator | 2026-04-07 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:17.721485 | orchestrator | 2026-04-07 01:03:17 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:17.723066 | orchestrator | 2026-04-07 01:03:17 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:03:17.723108 | orchestrator | 2026-04-07 01:03:17 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:20.773433 | orchestrator | 2026-04-07 01:03:20 | INFO  | Task 
34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:20.775610 | orchestrator | 2026-04-07 01:03:20 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:03:20.775679 | orchestrator | 2026-04-07 01:03:20 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:23.827200 | orchestrator | 2026-04-07 01:03:23 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:23.828858 | orchestrator | 2026-04-07 01:03:23 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state STARTED 2026-04-07 01:03:23.828904 | orchestrator | 2026-04-07 01:03:23 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:26.876984 | orchestrator | 2026-04-07 01:03:26 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:26.880237 | orchestrator | 2026-04-07 01:03:26 | INFO  | Task 2b835e3b-6a3a-4f52-bdc0-7ffb58f6c416 is in state SUCCESS 2026-04-07 01:03:26.880839 | orchestrator | 2026-04-07 01:03:26 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:29.943860 | orchestrator | 2026-04-07 01:03:29 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:03:29.946055 | orchestrator | 2026-04-07 01:03:29 | INFO  | Task 6382583a-222f-4c6a-82bf-b80626657250 is in state STARTED 2026-04-07 01:03:29.949955 | orchestrator | 2026-04-07 01:03:29 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:29.950961 | orchestrator | 2026-04-07 01:03:29 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:29.951012 | orchestrator | 2026-04-07 01:03:29 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:32.988044 | orchestrator | 2026-04-07 01:03:32 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:03:32.988745 | orchestrator | 2026-04-07 01:03:32 | INFO  | Task 6382583a-222f-4c6a-82bf-b80626657250 is in state 
SUCCESS 2026-04-07 01:03:32.989848 | orchestrator | 2026-04-07 01:03:32 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:32.990678 | orchestrator | 2026-04-07 01:03:32 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:32.990831 | orchestrator | 2026-04-07 01:03:32 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:36.174231 | orchestrator | 2026-04-07 01:03:36 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:03:36.174282 | orchestrator | 2026-04-07 01:03:36 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED 2026-04-07 01:03:36.174291 | orchestrator | 2026-04-07 01:03:36 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:03:36.174302 | orchestrator | 2026-04-07 01:03:36 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:36.174308 | orchestrator | 2026-04-07 01:03:36 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:36.174314 | orchestrator | 2026-04-07 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:39.226264 | orchestrator | 2026-04-07 01:03:39 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:03:39.226314 | orchestrator | 2026-04-07 01:03:39 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED 2026-04-07 01:03:39.226320 | orchestrator | 2026-04-07 01:03:39 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:03:39.226324 | orchestrator | 2026-04-07 01:03:39 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:39.226328 | orchestrator | 2026-04-07 01:03:39 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:39.226358 | orchestrator | 2026-04-07 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-04-07 
01:03:42.224630 | orchestrator | 2026-04-07 01:03:42 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:03:42.224690 | orchestrator | 2026-04-07 01:03:42 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED 2026-04-07 01:03:42.224698 | orchestrator | 2026-04-07 01:03:42 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:03:42.224823 | orchestrator | 2026-04-07 01:03:42 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:42.224833 | orchestrator | 2026-04-07 01:03:42 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:42.224839 | orchestrator | 2026-04-07 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:45.266592 | orchestrator | 2026-04-07 01:03:45 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:03:45.267212 | orchestrator | 2026-04-07 01:03:45 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED 2026-04-07 01:03:45.268179 | orchestrator | 2026-04-07 01:03:45 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:03:45.268720 | orchestrator | 2026-04-07 01:03:45 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:45.269102 | orchestrator | 2026-04-07 01:03:45 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:45.269477 | orchestrator | 2026-04-07 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:48.303121 | orchestrator | 2026-04-07 01:03:48 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:03:48.304132 | orchestrator | 2026-04-07 01:03:48 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED 2026-04-07 01:03:48.304795 | orchestrator | 2026-04-07 01:03:48 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 
01:03:48.306142 | orchestrator | 2026-04-07 01:03:48 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:48.308150 | orchestrator | 2026-04-07 01:03:48 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:48.308188 | orchestrator | 2026-04-07 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:51.338219 | orchestrator | 2026-04-07 01:03:51 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:03:51.338482 | orchestrator | 2026-04-07 01:03:51 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED 2026-04-07 01:03:51.339219 | orchestrator | 2026-04-07 01:03:51 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:03:51.339833 | orchestrator | 2026-04-07 01:03:51 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:51.341626 | orchestrator | 2026-04-07 01:03:51 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:51.341736 | orchestrator | 2026-04-07 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:54.384719 | orchestrator | 2026-04-07 01:03:54 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:03:54.385960 | orchestrator | 2026-04-07 01:03:54 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED 2026-04-07 01:03:54.388056 | orchestrator | 2026-04-07 01:03:54 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:03:54.389892 | orchestrator | 2026-04-07 01:03:54 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:54.392113 | orchestrator | 2026-04-07 01:03:54 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state STARTED 2026-04-07 01:03:54.392140 | orchestrator | 2026-04-07 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:03:57.469467 | orchestrator 
| 2026-04-07 01:03:57.469552 | orchestrator | 2026-04-07 01:03:57.469560 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-07 01:03:57.469565 | orchestrator | 2026-04-07 01:03:57.469569 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-07 01:03:57.469574 | orchestrator | Tuesday 07 April 2026 01:02:33 +0000 (0:00:00.302) 0:00:00.302 ********* 2026-04-07 01:03:57.469579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-07 01:03:57.469585 | orchestrator | 2026-04-07 01:03:57.469589 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-07 01:03:57.469593 | orchestrator | Tuesday 07 April 2026 01:02:33 +0000 (0:00:00.209) 0:00:00.511 ********* 2026-04-07 01:03:57.469597 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-07 01:03:57.469601 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-07 01:03:57.469606 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-07 01:03:57.469610 | orchestrator | 2026-04-07 01:03:57.469614 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-07 01:03:57.469618 | orchestrator | Tuesday 07 April 2026 01:02:34 +0000 (0:00:01.476) 0:00:01.988 ********* 2026-04-07 01:03:57.469623 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-07 01:03:57.469628 | orchestrator | 2026-04-07 01:03:57.469700 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-07 01:03:57.469706 | orchestrator | Tuesday 07 April 2026 01:02:35 +0000 (0:00:01.033) 0:00:03.021 ********* 2026-04-07 01:03:57.469710 | orchestrator | 
changed: [testbed-manager] 2026-04-07 01:03:57.469715 | orchestrator | 2026-04-07 01:03:57.469719 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-07 01:03:57.469723 | orchestrator | Tuesday 07 April 2026 01:02:36 +0000 (0:00:00.772) 0:00:03.793 ********* 2026-04-07 01:03:57.469726 | orchestrator | changed: [testbed-manager] 2026-04-07 01:03:57.469730 | orchestrator | 2026-04-07 01:03:57.469734 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-07 01:03:57.469738 | orchestrator | Tuesday 07 April 2026 01:02:37 +0000 (0:00:00.800) 0:00:04.593 ********* 2026-04-07 01:03:57.469742 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-07 01:03:57.469792 | orchestrator | ok: [testbed-manager] 2026-04-07 01:03:57.469797 | orchestrator | 2026-04-07 01:03:57.469802 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-07 01:03:57.469806 | orchestrator | Tuesday 07 April 2026 01:03:16 +0000 (0:00:38.967) 0:00:43.560 ********* 2026-04-07 01:03:57.469827 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-07 01:03:57.469831 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-07 01:03:57.469835 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-07 01:03:57.469839 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-07 01:03:57.469843 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-07 01:03:57.469846 | orchestrator | 2026-04-07 01:03:57.469850 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-07 01:03:57.469854 | orchestrator | Tuesday 07 April 2026 01:03:20 +0000 (0:00:04.229) 0:00:47.790 ********* 2026-04-07 01:03:57.469859 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-07 01:03:57.470004 | 
orchestrator | 2026-04-07 01:03:57.470008 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-07 01:03:57.470044 | orchestrator | Tuesday 07 April 2026 01:03:21 +0000 (0:00:00.633) 0:00:48.423 ********* 2026-04-07 01:03:57.470050 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:03:57.470054 | orchestrator | 2026-04-07 01:03:57.470057 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-07 01:03:57.470062 | orchestrator | Tuesday 07 April 2026 01:03:21 +0000 (0:00:00.126) 0:00:48.550 ********* 2026-04-07 01:03:57.470066 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:03:57.470069 | orchestrator | 2026-04-07 01:03:57.470073 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-07 01:03:57.470077 | orchestrator | Tuesday 07 April 2026 01:03:21 +0000 (0:00:00.303) 0:00:48.853 ********* 2026-04-07 01:03:57.470081 | orchestrator | changed: [testbed-manager] 2026-04-07 01:03:57.470085 | orchestrator | 2026-04-07 01:03:57.470089 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-07 01:03:57.470093 | orchestrator | Tuesday 07 April 2026 01:03:23 +0000 (0:00:01.358) 0:00:50.212 ********* 2026-04-07 01:03:57.470097 | orchestrator | changed: [testbed-manager] 2026-04-07 01:03:57.470101 | orchestrator | 2026-04-07 01:03:57.470105 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-07 01:03:57.470109 | orchestrator | Tuesday 07 April 2026 01:03:23 +0000 (0:00:00.727) 0:00:50.940 ********* 2026-04-07 01:03:57.470112 | orchestrator | changed: [testbed-manager] 2026-04-07 01:03:57.470116 | orchestrator | 2026-04-07 01:03:57.470131 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-07 01:03:57.470135 | orchestrator | Tuesday 07 April 2026 
01:03:24 +0000 (0:00:00.608) 0:00:51.548 ********* 2026-04-07 01:03:57.470139 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-07 01:03:57.470143 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-07 01:03:57.470146 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-07 01:03:57.470150 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-07 01:03:57.470154 | orchestrator | 2026-04-07 01:03:57.470158 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:03:57.470162 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 01:03:57.470167 | orchestrator | 2026-04-07 01:03:57.470171 | orchestrator | 2026-04-07 01:03:57.470198 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:03:57.470202 | orchestrator | Tuesday 07 April 2026 01:03:25 +0000 (0:00:01.494) 0:00:53.042 ********* 2026-04-07 01:03:57.470206 | orchestrator | =============================================================================== 2026-04-07 01:03:57.470210 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.97s 2026-04-07 01:03:57.470214 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.23s 2026-04-07 01:03:57.470217 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.49s 2026-04-07 01:03:57.470221 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.48s 2026-04-07 01:03:57.470225 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.36s 2026-04-07 01:03:57.470235 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.03s 2026-04-07 01:03:57.470240 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file 
---------------- 0.80s 2026-04-07 01:03:57.470243 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.77s 2026-04-07 01:03:57.470247 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s 2026-04-07 01:03:57.470251 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.63s 2026-04-07 01:03:57.470255 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s 2026-04-07 01:03:57.470259 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2026-04-07 01:03:57.470262 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2026-04-07 01:03:57.470266 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-04-07 01:03:57.470270 | orchestrator | 2026-04-07 01:03:57.470274 | orchestrator | 2026-04-07 01:03:57.470277 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:03:57.470281 | orchestrator | 2026-04-07 01:03:57.470285 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:03:57.470289 | orchestrator | Tuesday 07 April 2026 01:03:29 +0000 (0:00:00.233) 0:00:00.233 ********* 2026-04-07 01:03:57.470293 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:03:57.470296 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:03:57.470300 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:03:57.470304 | orchestrator | 2026-04-07 01:03:57.470308 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:03:57.470312 | orchestrator | Tuesday 07 April 2026 01:03:30 +0000 (0:00:00.422) 0:00:00.656 ********* 2026-04-07 01:03:57.470315 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-07 01:03:57.470319 | 
orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-07 01:03:57.470323 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-07 01:03:57.470327 | orchestrator | 2026-04-07 01:03:57.470331 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-04-07 01:03:57.470335 | orchestrator | 2026-04-07 01:03:57.470354 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-04-07 01:03:57.470360 | orchestrator | Tuesday 07 April 2026 01:03:30 +0000 (0:00:00.612) 0:00:01.269 ********* 2026-04-07 01:03:57.470366 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:03:57.470372 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:03:57.470377 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:03:57.470383 | orchestrator | 2026-04-07 01:03:57.470389 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:03:57.470396 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 01:03:57.470402 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 01:03:57.470405 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 01:03:57.470409 | orchestrator | 2026-04-07 01:03:57.470413 | orchestrator | 2026-04-07 01:03:57.470417 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:03:57.470421 | orchestrator | Tuesday 07 April 2026 01:03:31 +0000 (0:00:01.208) 0:00:02.478 ********* 2026-04-07 01:03:57.470424 | orchestrator | =============================================================================== 2026-04-07 01:03:57.470428 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.21s 2026-04-07 01:03:57.470432 | orchestrator | Group 
hosts based on enabled services ----------------------------------- 0.61s 2026-04-07 01:03:57.470436 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2026-04-07 01:03:57.470443 | orchestrator | 2026-04-07 01:03:57.470447 | orchestrator | 2026-04-07 01:03:57.470451 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:03:57.470454 | orchestrator | 2026-04-07 01:03:57.470462 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:03:57.470466 | orchestrator | Tuesday 07 April 2026 01:01:15 +0000 (0:00:00.336) 0:00:00.336 ********* 2026-04-07 01:03:57.470470 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:03:57.470474 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:03:57.470477 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:03:57.470481 | orchestrator | 2026-04-07 01:03:57.470485 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:03:57.470488 | orchestrator | Tuesday 07 April 2026 01:01:15 +0000 (0:00:00.276) 0:00:00.613 ********* 2026-04-07 01:03:57.470492 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-07 01:03:57.470496 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-07 01:03:57.470500 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-07 01:03:57.470504 | orchestrator | 2026-04-07 01:03:57.470507 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-07 01:03:57.470511 | orchestrator | 2026-04-07 01:03:57.470531 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-07 01:03:57.470535 | orchestrator | Tuesday 07 April 2026 01:01:16 +0000 (0:00:00.284) 0:00:00.897 ********* 2026-04-07 01:03:57.470539 | orchestrator | included: 
/ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:03:57.470543 | orchestrator | 2026-04-07 01:03:57.470547 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-07 01:03:57.470550 | orchestrator | Tuesday 07 April 2026 01:01:16 +0000 (0:00:00.629) 0:00:01.527 ********* 2026-04-07 01:03:57.470558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.470565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.470572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.470582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470624 | orchestrator | 2026-04-07 01:03:57.470628 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-07 01:03:57.470632 | orchestrator | Tuesday 07 April 2026 01:01:18 +0000 (0:00:02.222) 0:00:03.749 ********* 2026-04-07 01:03:57.470636 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57.470641 | orchestrator | 2026-04-07 01:03:57.470646 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-07 01:03:57.470650 | orchestrator | 
Tuesday 07 April 2026 01:01:19 +0000 (0:00:00.124) 0:00:03.873 ********* 2026-04-07 01:03:57.470657 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57.470662 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:03:57.470666 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:03:57.470671 | orchestrator | 2026-04-07 01:03:57.470676 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-07 01:03:57.470680 | orchestrator | Tuesday 07 April 2026 01:01:19 +0000 (0:00:00.255) 0:00:04.128 ********* 2026-04-07 01:03:57.470685 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 01:03:57.470689 | orchestrator | 2026-04-07 01:03:57.470694 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-07 01:03:57.470698 | orchestrator | Tuesday 07 April 2026 01:01:20 +0000 (0:00:00.884) 0:00:05.012 ********* 2026-04-07 01:03:57.470703 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:03:57.470707 | orchestrator | 2026-04-07 01:03:57.470712 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-07 01:03:57.470727 | orchestrator | Tuesday 07 April 2026 01:01:20 +0000 (0:00:00.679) 0:00:05.692 ********* 2026-04-07 01:03:57.470732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.470738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.470746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.470754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.470790 | orchestrator | 2026-04-07 01:03:57.470795 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-07 01:03:57.470800 | orchestrator | Tuesday 07 April 2026 01:01:24 +0000 (0:00:03.198) 0:00:08.890 ********* 2026-04-07 01:03:57.470807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 01:03:57.470818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.470822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 01:03:57.470827 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57.470832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 01:03:57.470849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.470854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 01:03:57.470864 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:03:57.470879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 01:03:57.470883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.470887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 01:03:57.470894 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:03:57.470898 | orchestrator | 2026-04-07 
01:03:57.470902 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-07 01:03:57.470906 | orchestrator | Tuesday 07 April 2026 01:01:24 +0000 (0:00:00.597) 0:00:09.488 ********* 2026-04-07 01:03:57.470910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 01:03:57.470914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.470920 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 01:03:57.470924 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:03:57 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED 2026-04-07 01:03:57 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:03:57 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:03:57 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED 2026-04-07 01:03:57 | INFO  | Task 34a6c7d5-3b9d-4a5e-8eaf-c274d30d239d is in state SUCCESS 2026-04-07 01:03:57.470932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 01:03:57.470966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.470970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 01:03:57.470974 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:03:57.470980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 01:03:57.470989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.470993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 01:03:57.471000 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:03:57.471004 | orchestrator | 2026-04-07 01:03:57.471008 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-07 01:03:57.471014 | orchestrator | Tuesday 07 April 2026 01:01:25 +0000 (0:00:00.959) 0:00:10.447 ********* 2026-04-07 01:03:57.471020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.471027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.471042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.471054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 01:03:57.471066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 01:03:57.471073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 01:03:57.471080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.471086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.471097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.471104 | orchestrator | 2026-04-07 01:03:57.471110 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-07 01:03:57.471116 | orchestrator | Tuesday 07 April 2026 01:01:28 +0000 (0:00:03.241) 0:00:13.688 ********* 2026-04-07 01:03:57.471127 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.471138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.471145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.471152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.471162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 01:03:57.471176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.471182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.471189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.471195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 01:03:57.471201 | orchestrator | 2026-04-07 01:03:57.471207 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-07 01:03:57.471214 | orchestrator | Tuesday 07 April 2026 01:01:34 +0000 (0:00:05.221) 0:00:18.910 ********* 2026-04-07 01:03:57.471219 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:03:57.471226 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:03:57.471232 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:03:57.471237 | orchestrator | 2026-04-07 01:03:57.471244 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-07 01:03:57.471249 | orchestrator | Tuesday 07 April 2026 01:01:35 +0000 (0:00:01.368) 0:00:20.279 ********* 2026-04-07 01:03:57.471255 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57.471261 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:03:57.471267 | orchestrator | 
skipping: [testbed-node-2] 2026-04-07 01:03:57.471274 | orchestrator | 2026-04-07 01:03:57.471280 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-07 01:03:57.471287 | orchestrator | Tuesday 07 April 2026 01:01:36 +0000 (0:00:00.902) 0:00:21.181 ********* 2026-04-07 01:03:57.471293 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57.471299 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:03:57.471305 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:03:57.471310 | orchestrator | 2026-04-07 01:03:57.471320 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-07 01:03:57.471331 | orchestrator | Tuesday 07 April 2026 01:01:36 +0000 (0:00:00.287) 0:00:21.469 ********* 2026-04-07 01:03:57.471335 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57.471360 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:03:57.471365 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:03:57.471368 | orchestrator | 2026-04-07 01:03:57.471372 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-07 01:03:57.471376 | orchestrator | Tuesday 07 April 2026 01:01:37 +0000 (0:00:00.398) 0:00:21.867 ********* 2026-04-07 01:03:57.471392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 01:03:57.471397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.471401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 01:03:57.471405 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57.471409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 01:03:57.471421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.471428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 01:03:57.471432 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:03:57.471436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 01:03:57.471440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 01:03:57.471444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 01:03:57.471448 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:03:57.471452 | orchestrator | 2026-04-07 01:03:57.471456 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-07 01:03:57.471459 | orchestrator | Tuesday 07 April 2026 01:01:37 +0000 (0:00:00.570) 0:00:22.437 ********* 2026-04-07 01:03:57.471463 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57.471470 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:03:57.471474 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:03:57.471478 | orchestrator | 2026-04-07 01:03:57.471482 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-07 01:03:57.471485 | orchestrator | Tuesday 07 April 2026 01:01:38 +0000 (0:00:00.457) 0:00:22.894 ********* 2026-04-07 01:03:57.471489 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-07 01:03:57.471494 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-07 01:03:57.471498 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-07 01:03:57.471502 | orchestrator | 2026-04-07 01:03:57.471506 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-07 01:03:57.471510 | orchestrator | Tuesday 
07 April 2026 01:01:39 +0000 (0:00:01.759) 0:00:24.654 ********* 2026-04-07 01:03:57.471514 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 01:03:57.471517 | orchestrator | 2026-04-07 01:03:57.471521 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-07 01:03:57.471528 | orchestrator | Tuesday 07 April 2026 01:01:40 +0000 (0:00:01.008) 0:00:25.662 ********* 2026-04-07 01:03:57.471531 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:03:57.471536 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:03:57.471541 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:03:57.471547 | orchestrator | 2026-04-07 01:03:57.471553 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-07 01:03:57.471559 | orchestrator | Tuesday 07 April 2026 01:01:41 +0000 (0:00:00.497) 0:00:26.160 ********* 2026-04-07 01:03:57.471565 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 01:03:57.471571 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-07 01:03:57.471578 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-07 01:03:57.471585 | orchestrator | 2026-04-07 01:03:57.471591 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-07 01:03:57.471598 | orchestrator | Tuesday 07 April 2026 01:01:42 +0000 (0:00:01.156) 0:00:27.317 ********* 2026-04-07 01:03:57.471605 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:03:57.471616 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:03:57.471623 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:03:57.471629 | orchestrator | 2026-04-07 01:03:57.471636 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-07 01:03:57.471643 | orchestrator | Tuesday 07 April 2026 01:01:42 +0000 (0:00:00.441) 0:00:27.759 ********* 2026-04-07 01:03:57.471650 | orchestrator | changed: 
[testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-07 01:03:57.471657 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-07 01:03:57.471663 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-07 01:03:57.471670 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-07 01:03:57.471677 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-07 01:03:57.471684 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-07 01:03:57.471691 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-07 01:03:57.471698 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-07 01:03:57.471704 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-07 01:03:57.471712 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-07 01:03:57.471718 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-07 01:03:57.471730 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-07 01:03:57.471737 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-07 01:03:57.471743 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-07 01:03:57.471750 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 
'fernet-healthcheck.sh'}) 2026-04-07 01:03:57.471757 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 01:03:57.471764 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 01:03:57.471771 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 01:03:57.471778 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 01:03:57.471784 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 01:03:57.471791 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 01:03:57.471798 | orchestrator | 2026-04-07 01:03:57.471805 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-07 01:03:57.471811 | orchestrator | Tuesday 07 April 2026 01:01:52 +0000 (0:00:09.194) 0:00:36.953 ********* 2026-04-07 01:03:57.471818 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 01:03:57.471825 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 01:03:57.471831 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 01:03:57.471838 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 01:03:57.471845 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 01:03:57.471851 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 01:03:57.471858 | orchestrator | 2026-04-07 01:03:57.471865 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-07 01:03:57.471871 | 
orchestrator | Tuesday 07 April 2026 01:01:54 +0000 (0:00:02.443) 0:00:39.397 *********
2026-04-07 01:03:57.471887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 01:03:57.471895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 01:03:57.471907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 01:03:57.471915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 01:03:57.471922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 01:03:57.471932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 01:03:57.471944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 01:03:57.471955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 01:03:57.471963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 01:03:57.471970 | orchestrator |
2026-04-07 01:03:57.471977 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-07 01:03:57.471984 | orchestrator | Tuesday 07 April 2026 01:01:57 +0000 (0:00:02.728) 0:00:42.125 *********
2026-04-07 01:03:57.471991 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:03:57.471997 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:03:57.472004 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:03:57.472011 | orchestrator |
2026-04-07 01:03:57.472018 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-07 01:03:57.472025 | orchestrator | Tuesday 07 April 2026 01:01:57 +0000 (0:00:00.361) 0:00:42.487 *********
2026-04-07 01:03:57.472032 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:03:57.472038 | orchestrator |
2026-04-07 01:03:57.472045 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-04-07 01:03:57.472052 | orchestrator | Tuesday 07 April 2026 01:01:59 +0000 (0:00:01.903) 0:00:44.390 *********
2026-04-07 01:03:57.472059 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:03:57.472066 | orchestrator |
2026-04-07 01:03:57.472072 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-04-07 01:03:57.472079 | orchestrator | Tuesday 07 April 2026 01:02:01 +0000 (0:00:01.986) 0:00:46.376 *********
2026-04-07 01:03:57.472086 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:03:57.472093 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:03:57.472100 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:03:57.472106 | orchestrator |
2026-04-07 01:03:57.472113 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-04-07 01:03:57.472120 | orchestrator | Tuesday 07 April 2026 01:02:02 +0000 (0:00:00.897) 0:00:47.273 *********
2026-04-07 01:03:57.472127 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:03:57.472134 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:03:57.472140 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:03:57.472147 | orchestrator |
2026-04-07 01:03:57.472154 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-04-07 01:03:57.472161 | orchestrator | Tuesday 07 April 2026 01:02:02 +0000 (0:00:00.362) 0:00:47.636 *********
2026-04-07 01:03:57.472168 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:03:57.472174 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:03:57.472181 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:03:57.472188 | orchestrator |
2026-04-07 01:03:57.472194 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-04-07 01:03:57.472201 | orchestrator | Tuesday 07 April 2026 01:02:03 +0000 (0:00:00.453) 0:00:48.090 *********
2026-04-07 01:03:57.472208 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:03:57.472215 | orchestrator |
2026-04-07 01:03:57.472225 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-04-07 01:03:57.472243 | orchestrator | Tuesday 07 April 2026 01:02:16 +0000 (0:00:13.135) 0:01:01.225 *********
2026-04-07 01:03:57.472250 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:03:57.472257 | orchestrator |
2026-04-07 01:03:57.472263 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-07 01:03:57.472270 | orchestrator | Tuesday 07 April 2026 01:02:26 +0000 (0:00:09.718) 0:01:10.944 *********
2026-04-07 01:03:57.472277 | orchestrator |
2026-04-07 01:03:57.472284 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-07 01:03:57.472290 | orchestrator | Tuesday 07 April 2026 01:02:26 +0000 (0:00:00.064) 0:01:11.009 *********
2026-04-07 01:03:57.472297 | orchestrator |
2026-04-07 01:03:57.472304 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-07 01:03:57.472311 | orchestrator | Tuesday 07 April 2026 01:02:26 +0000 (0:00:00.064) 0:01:11.073 *********
2026-04-07 01:03:57.472317 | orchestrator |
2026-04-07 01:03:57.472327 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-04-07 01:03:57.472334 | orchestrator | Tuesday 07 April 2026 01:02:26 +0000 (0:00:00.069) 0:01:11.143 *********
2026-04-07 01:03:57.472384 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:03:57.472391 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:03:57.472397 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:03:57.472403 | orchestrator |
2026-04-07 01:03:57.472410 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-04-07 01:03:57.472415 | orchestrator | Tuesday 07 April 2026 01:02:41 +0000 (0:00:15.679) 0:01:26.823 *********
2026-04-07 01:03:57.472421 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:03:57.472428 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:03:57.472432 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:03:57.472436 | orchestrator |
2026-04-07 01:03:57.472439 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-04-07 01:03:57.472443 | orchestrator | Tuesday 07 April 2026 01:02:52 +0000 (0:00:10.197) 0:01:37.020 *********
2026-04-07 01:03:57.472447 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:03:57.472451 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:03:57.472454 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:03:57.472458 | orchestrator |
2026-04-07 01:03:57.472462 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-07 01:03:57.472466 | orchestrator | Tuesday 07 April 2026 01:03:03 +0000 (0:00:10.847) 0:01:47.868 *********
2026-04-07 01:03:57.472469 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:03:57.472473 | orchestrator |
2026-04-07 01:03:57.472477 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-04-07 01:03:57.472481 | orchestrator | Tuesday 07 April 2026 01:03:03 +0000 (0:00:00.721) 0:01:48.589 *********
2026-04-07 01:03:57.472484 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:03:57.472488 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:03:57.472492 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:03:57.472496 | orchestrator |
2026-04-07 01:03:57.472499 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-04-07 01:03:57.472503 | orchestrator | Tuesday 07 April 2026 01:03:04 +0000 (0:00:00.756) 0:01:49.346 *********
2026-04-07 01:03:57.472507 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:03:57.472511 | orchestrator |
2026-04-07 01:03:57.472514 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-04-07 01:03:57.472518 | orchestrator | Tuesday 07 April 2026 01:03:06 +0000 (0:00:01.576) 0:01:50.923 *********
2026-04-07 01:03:57.472522 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-04-07 01:03:57.472526 | orchestrator |
2026-04-07 01:03:57.472529 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-04-07 01:03:57.472533 | orchestrator | Tuesday 07 April 2026 01:03:16 +0000 (0:00:10.857) 0:02:01.780 *********
2026-04-07 01:03:57.472542 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-04-07 01:03:57.472545 | orchestrator |
2026-04-07 01:03:57.472549 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-04-07 01:03:57.472553 | orchestrator | Tuesday 07 April 2026 01:03:44 +0000 (0:00:28.015) 0:02:29.795 *********
2026-04-07 01:03:57.472557 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-04-07 01:03:57.472561 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-04-07 01:03:57.472564 | orchestrator |
2026-04-07 01:03:57.472568 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-04-07 01:03:57.472572 | orchestrator | Tuesday 07 April 2026 01:03:50 +0000 (0:00:05.802) 0:02:35.598 *********
2026-04-07 01:03:57.472576 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:03:57.472579 | orchestrator |
2026-04-07 01:03:57.472583 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-04-07 01:03:57.472587 | orchestrator | Tuesday 07 April 2026 01:03:50 +0000 (0:00:00.109) 0:02:35.707 *********
2026-04-07 01:03:57.472591 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:03:57.472595 | orchestrator |
2026-04-07 01:03:57.472598 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-04-07 01:03:57.472602 | orchestrator | Tuesday 07 April 2026 01:03:50 +0000 (0:00:00.110) 0:02:35.818 *********
2026-04-07 01:03:57.472606 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:03:57.472610 | orchestrator |
2026-04-07 01:03:57.472613 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-04-07 01:03:57.472617 | orchestrator | Tuesday 07 April 2026 01:03:51 +0000 (0:00:00.162) 0:02:35.980 *********
2026-04-07 01:03:57.472621 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:03:57.472625 | orchestrator |
2026-04-07 01:03:57.472628 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-04-07 01:03:57.472632 | orchestrator | Tuesday 07 April 2026 01:03:51 +0000 (0:00:00.389) 0:02:36.369 *********
2026-04-07 01:03:57.472636 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:03:57.472640 | orchestrator |
2026-04-07 01:03:57.472646 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-07 01:03:57.472650 | orchestrator | Tuesday 07 April 2026 01:03:54 +0000 (0:00:02.811) 0:02:39.181 *********
2026-04-07 01:03:57.472654 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:03:57.472659 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:03:57.472666 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:03:57.472671 | orchestrator |
2026-04-07 01:03:57.472678 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:03:57.472687 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-07 01:03:57.472695 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-07 01:03:57.472706 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-07 01:03:57.472712 | orchestrator |
2026-04-07 01:03:57.472718 | orchestrator |
2026-04-07 01:03:57.472723 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:03:57.472729 | orchestrator | Tuesday 07 April 2026 01:03:54 +0000 (0:00:00.462) 0:02:39.644 *********
2026-04-07 01:03:57.472735 | orchestrator | ===============================================================================
2026-04-07 01:03:57.472741 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.02s
2026-04-07 01:03:57.472747 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 15.68s
2026-04-07 01:03:57.472753 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.13s
2026-04-07 01:03:57.472759 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.86s
2026-04-07 01:03:57.472771 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.85s
2026-04-07 01:03:57.472778 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.20s
2026-04-07 01:03:57.472784 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.72s
2026-04-07 01:03:57.472791 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.19s
2026-04-07 01:03:57.472795 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.80s
2026-04-07 01:03:57.472799 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.22s
2026-04-07 01:03:57.472803 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.24s
2026-04-07 01:03:57.472807 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.20s
2026-04-07 01:03:57.472811 | orchestrator | keystone : Creating default user role ----------------------------------- 2.81s
2026-04-07 01:03:57.472814 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.73s
2026-04-07 01:03:57.472818 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.44s
2026-04-07 01:03:57.472822 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.22s
2026-04-07 01:03:57.472825 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 1.99s
2026-04-07 01:03:57.472829 | orchestrator | keystone : Creating keystone database ----------------------------------- 1.90s
2026-04-07 01:03:57.472833 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.76s
2026-04-07 01:03:57.472836 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.58s
2026-04-07 01:03:57.472840 | orchestrator | 2026-04-07 01:03:57 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:04:00.452877 | orchestrator | 2026-04-07 01:04:00 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED
2026-04-07 01:04:00.453263 | orchestrator | 2026-04-07 01:04:00 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED
2026-04-07 01:04:00.455432 | orchestrator | 2026-04-07 01:04:00 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED
2026-04-07 01:04:00.456460 | orchestrator | 2026-04-07 01:04:00 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED
2026-04-07 01:04:00.458918 | orchestrator | 2026-04-07 01:04:00 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED
2026-04-07 01:04:00.458978 | orchestrator | 2026-04-07 01:04:00 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:04:03.482866 | orchestrator | 2026-04-07 01:04:03 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED
2026-04-07 01:04:03.483140 | orchestrator | 2026-04-07 01:04:03 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED
2026-04-07 01:04:03.484304 | orchestrator | 2026-04-07 01:04:03 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED
2026-04-07 01:04:03.485518 | orchestrator | 2026-04-07 01:04:03 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED
2026-04-07 01:04:03.485546 | orchestrator | 2026-04-07 01:04:03 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED
2026-04-07 01:04:03.485566 | orchestrator | 2026-04-07 01:04:03 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:04:06.533396 | orchestrator | 2026-04-07 01:04:06 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED
2026-04-07 01:04:06.537678 | orchestrator | 2026-04-07 01:04:06 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED
2026-04-07 01:04:06.537732 | orchestrator | 2026-04-07 01:04:06 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED
2026-04-07 01:04:06.537762 | orchestrator | 2026-04-07 01:04:06 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED
2026-04-07 01:04:06.537770 | orchestrator | 2026-04-07 01:04:06 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED
2026-04-07 01:04:06.537778 | orchestrator | 2026-04-07 01:04:06 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:04:09.557313 | orchestrator | 2026-04-07 01:04:09 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED
2026-04-07 01:04:09.557642 | orchestrator | 2026-04-07 01:04:09 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state STARTED
2026-04-07 01:04:09.558272 | orchestrator | 2026-04-07 01:04:09 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED
2026-04-07 01:04:09.559133 | orchestrator | 2026-04-07 01:04:09 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED
2026-04-07 01:04:09.559742 | orchestrator | 2026-04-07 01:04:09 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED
2026-04-07 01:04:09.559776 | orchestrator | 2026-04-07 01:04:09 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:04:12.657719 | orchestrator | 2026-04-07 01:04:12 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED
2026-04-07 01:04:12.657777 | orchestrator | 2026-04-07 01:04:12 | INFO  | Task 86a11f26-7e23-463b-b687-150aa644d41d is in state SUCCESS
2026-04-07 01:04:12.657787 | orchestrator | 2026-04-07 01:04:12 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED
2026-04-07 01:04:12.657795 | orchestrator | 2026-04-07 01:04:12 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED
2026-04-07 01:04:12.657802 | orchestrator | 2026-04-07 01:04:12 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state STARTED
2026-04-07 01:04:12.657809 | orchestrator | 2026-04-07 01:04:12 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED
2026-04-07 01:04:12.657815 | orchestrator | 2026-04-07 01:04:12 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:04:15.620717 | orchestrator | 2026-04-07 01:04:15 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED
2026-04-07 01:04:15.623007 | orchestrator | 2026-04-07 01:04:15 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED
2026-04-07 01:04:15.625313 | orchestrator | 2026-04-07 01:04:15 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED
2026-04-07 01:04:15.626276 | orchestrator | 2026-04-07 01:04:15 | INFO  | Task 5dab88f5-82d5-42aa-a91b-2437d52dc353 is in state SUCCESS
2026-04-07 01:04:15.626505 | orchestrator |
2026-04-07 01:04:15.626522 | orchestrator |
2026-04-07 01:04:15.626529 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 01:04:15.626537 | orchestrator |
2026-04-07 01:04:15.626543 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 01:04:15.626554 | orchestrator | Tuesday 07 April 2026 01:03:37 +0000 (0:00:00.416) 0:00:00.416 *********
2026-04-07 01:04:15.626562 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:04:15.626570 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:04:15.626576 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:04:15.626583 | orchestrator | ok: [testbed-manager]
2026-04-07 01:04:15.626589 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:04:15.626595 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:04:15.626601 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:04:15.626606 | orchestrator |
2026-04-07 01:04:15.626613 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 01:04:15.626619 | orchestrator | Tuesday 07 April 2026 01:03:38 +0000 (0:00:00.929) 0:00:01.346 *********
2026-04-07 01:04:15.626624 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-04-07 01:04:15.626651 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-04-07 01:04:15.626658 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-04-07 01:04:15.626665 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-04-07 01:04:15.626671 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-04-07 01:04:15.626677 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-04-07 01:04:15.626683 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-04-07 01:04:15.626689 | orchestrator |
2026-04-07 01:04:15.626695 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-07 01:04:15.626701 | orchestrator |
2026-04-07 01:04:15.626707 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-04-07 01:04:15.626714 | orchestrator | Tuesday 07 April 2026 01:03:40 +0000 (0:00:01.743) 0:00:03.090 *********
2026-04-07 01:04:15.626730 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 01:04:15.626736 | orchestrator |
2026-04-07 01:04:15.626740 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-04-07 01:04:15.626743 | orchestrator | Tuesday 07 April 2026 01:03:42 +0000 (0:00:01.633) 0:00:04.723 *********
2026-04-07 01:04:15.626747 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2026-04-07 01:04:15.626752 | orchestrator |
2026-04-07 01:04:15.626756 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-04-07 01:04:15.626760 | orchestrator | Tuesday 07 April 2026 01:03:46 +0000 (0:00:04.193) 0:00:08.916 *********
2026-04-07 01:04:15.626765 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-04-07 01:04:15.626770 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-04-07 01:04:15.626774 | orchestrator |
2026-04-07 01:04:15.626779 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-04-07 01:04:15.626804 | orchestrator | Tuesday 07 April 2026 01:03:52 +0000 (0:00:05.837) 0:00:14.754 *********
2026-04-07 01:04:15.626811 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-07 01:04:15.626817 | orchestrator |
2026-04-07 01:04:15.626823 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-04-07 01:04:15.626830 | orchestrator | Tuesday 07 April 2026 01:03:55 +0000 (0:00:02.858) 0:00:17.613 *********
2026-04-07 01:04:15.626836 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2026-04-07 01:04:15.626842 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 01:04:15.626848 | orchestrator |
2026-04-07 01:04:15.626855 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-04-07 01:04:15.626859 | orchestrator | Tuesday 07 April 2026 01:03:58 +0000 (0:00:03.561) 0:00:21.174 *********
2026-04-07 01:04:15.626863 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-07 01:04:15.626867 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2026-04-07 01:04:15.626871 | orchestrator |
2026-04-07 01:04:15.626875 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-04-07 01:04:15.626879 | orchestrator | Tuesday 07 April 2026 01:04:04 +0000 (0:00:05.772) 0:00:26.947 *********
2026-04-07 01:04:15.626883 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2026-04-07 01:04:15.626887 | orchestrator |
2026-04-07 01:04:15.626890 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:04:15.626894 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.626898 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.626909 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.626913 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.626916 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.626928 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.626932 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.626937 | orchestrator |
2026-04-07 01:04:15.626941 | orchestrator |
2026-04-07 01:04:15.626945 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:04:15.626948 | orchestrator | Tuesday 07 April 2026 01:04:09 +0000 (0:00:05.219) 0:00:32.167 *********
2026-04-07 01:04:15.626952 | orchestrator | ===============================================================================
2026-04-07 01:04:15.626956 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.84s
2026-04-07 01:04:15.626960 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.77s
2026-04-07 01:04:15.626963 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.22s
2026-04-07 01:04:15.626967 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.19s
2026-04-07 01:04:15.626971 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.56s
2026-04-07 01:04:15.626975 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.86s
2026-04-07 01:04:15.626979 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.74s
2026-04-07 01:04:15.626982 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.63s
2026-04-07 01:04:15.626986 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s
2026-04-07 01:04:15.626990 | orchestrator |
2026-04-07 01:04:15.626994 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-07 01:04:15.626998 | orchestrator | 2.16.14
2026-04-07 01:04:15.627002 | orchestrator |
2026-04-07 01:04:15.627005 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-04-07 01:04:15.627009 | orchestrator |
2026-04-07 01:04:15.627017 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-04-07 01:04:15.627021 | orchestrator | Tuesday 07 April 2026 01:03:30 +0000 (0:00:00.243) 0:00:00.243 *********
2026-04-07 01:04:15.627024 | orchestrator | changed: [testbed-manager]
2026-04-07 01:04:15.627028 | orchestrator |
2026-04-07 01:04:15.627032 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-04-07 01:04:15.627036 | orchestrator | Tuesday 07 April 2026 01:03:33 +0000 (0:00:02.817) 0:00:03.061 *********
2026-04-07 01:04:15.627039 | orchestrator | changed: [testbed-manager]
2026-04-07 01:04:15.627043 | orchestrator |
2026-04-07 01:04:15.627047 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-04-07 01:04:15.627051 | orchestrator | Tuesday 07 April 2026 01:03:34 +0000 (0:00:01.259) 0:00:04.320 *********
2026-04-07 01:04:15.627054 | orchestrator | changed: [testbed-manager]
2026-04-07 01:04:15.627058 | orchestrator |
2026-04-07 01:04:15.627062 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-04-07 01:04:15.627066 | orchestrator | Tuesday 07 April 2026 01:03:36 +0000 (0:00:01.335) 0:00:05.656 *********
2026-04-07 01:04:15.627070 | orchestrator | changed: [testbed-manager]
2026-04-07 01:04:15.627073 | orchestrator |
2026-04-07 01:04:15.627077 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-04-07 01:04:15.627081 | orchestrator | Tuesday 07 April 2026 01:03:37 +0000 (0:00:01.363) 0:00:07.019 *********
2026-04-07 01:04:15.627088 | orchestrator | changed: [testbed-manager]
2026-04-07 01:04:15.627092 | orchestrator |
2026-04-07 01:04:15.627095 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-04-07 01:04:15.627099 | orchestrator | Tuesday 07 April 2026 01:03:38 +0000 (0:00:01.154) 0:00:08.174 *********
2026-04-07 01:04:15.627103 | orchestrator | changed: [testbed-manager]
2026-04-07 01:04:15.627107 | orchestrator |
2026-04-07 01:04:15.627111 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-04-07 01:04:15.627114 | orchestrator | Tuesday 07 April 2026 01:03:39 +0000 (0:00:01.152) 0:00:09.327 *********
2026-04-07 01:04:15.627118 | orchestrator | changed: [testbed-manager]
2026-04-07 01:04:15.627123 | orchestrator |
2026-04-07 01:04:15.627127 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-04-07 01:04:15.627132 | orchestrator | Tuesday 07 April 2026 01:03:41 +0000 (0:00:01.531) 0:00:10.858 *********
2026-04-07 01:04:15.627136 | orchestrator | changed: [testbed-manager]
2026-04-07 01:04:15.627140 | orchestrator |
2026-04-07 01:04:15.627145 | orchestrator | TASK [Create admin user] *******************************************************
2026-04-07 01:04:15.627149 | orchestrator | Tuesday 07 April 2026 01:03:42 +0000 (0:00:01.248) 0:00:12.107 *********
2026-04-07 01:04:15.627154 | orchestrator | changed: [testbed-manager]
2026-04-07 01:04:15.627159 | orchestrator |
2026-04-07 01:04:15.627163 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-04-07 01:04:15.627167 | orchestrator | Tuesday 07 April 2026 01:03:49 +0000 (0:00:07.087) 0:00:19.194 *********
2026-04-07 01:04:15.627172 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:04:15.627176 | orchestrator |
2026-04-07 01:04:15.627181 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-07 01:04:15.627186 | orchestrator |
2026-04-07 01:04:15.627190 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-07 01:04:15.627195 | orchestrator | Tuesday 07 April 2026 01:03:50 +0000 (0:00:00.173) 0:00:19.367 *********
2026-04-07 01:04:15.627200 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:04:15.627204 | orchestrator |
2026-04-07 01:04:15.627209 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-07 01:04:15.627213 | orchestrator |
2026-04-07 01:04:15.627218 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-07 01:04:15.627222 | orchestrator | Tuesday 07 April 2026 01:03:51 +0000 (0:00:01.757) 0:00:21.125 *********
2026-04-07 01:04:15.627227 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:04:15.627232 | orchestrator |
2026-04-07 01:04:15.627236 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-07 01:04:15.627241 | orchestrator |
2026-04-07 01:04:15.627245 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-07 01:04:15.627254 | orchestrator | Tuesday 07 April 2026 01:04:03 +0000 (0:00:11.671) 0:00:32.796 *********
2026-04-07 01:04:15.627260 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:04:15.627266 | orchestrator |
2026-04-07 01:04:15.627273 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:04:15.627281 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 01:04:15.627289 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.627295 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.627301 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:04:15.627308 | orchestrator |
2026-04-07 01:04:15.627313 | orchestrator |
2026-04-07 01:04:15.627320 | orchestrator |
2026-04-07 01:04:15.627326 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:04:15.627336 | orchestrator | Tuesday 07 April 2026 01:04:15 +0000 (0:00:11.552) 0:00:44.349 *********
2026-04-07 01:04:15.627342 | orchestrator | ===============================================================================
2026-04-07 01:04:15.627441 | orchestrator | Restart ceph manager service ------------------------------------------- 24.98s
2026-04-07 01:04:15.627448 | orchestrator | Create admin user ------------------------------------------------------- 7.09s
2026-04-07 01:04:15.627454 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.82s
2026-04-07 01:04:15.627459 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.53s
2026-04-07 01:04:15.627470 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.36s
2026-04-07 01:04:15.627477 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.34s
2026-04-07 01:04:15.627483 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.26s
2026-04-07 01:04:15.627488 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.25s
2026-04-07 01:04:15.627493 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.15s
2026-04-07 01:04:15.627500 | orchestrator | Set mgr/dashboard/standby_error_status_code to
404 ---------------------- 1.15s 2026-04-07 01:04:15.627506 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s 2026-04-07 01:04:15.628201 | orchestrator | 2026-04-07 01:04:15 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:04:15.628883 | orchestrator | 2026-04-07 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:04:18.667308 | orchestrator | 2026-04-07 01:04:18 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:04:18.668772 | orchestrator | 2026-04-07 01:04:18 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:04:18.670305 | orchestrator | 2026-04-07 01:04:18 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:04:18.671822 | orchestrator | 2026-04-07 01:04:18 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:04:18.671864 | orchestrator | 2026-04-07 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:04:21.696424 | orchestrator | 2026-04-07 01:04:21 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:04:21.697280 | orchestrator | 2026-04-07 01:04:21 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:04:21.697324 | orchestrator | 2026-04-07 01:04:21 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:04:21.697829 | orchestrator | 2026-04-07 01:04:21 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:04:21.698147 | orchestrator | 2026-04-07 01:04:21 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:04:24.723134 | orchestrator | 2026-04-07 01:04:24 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:04:24.725552 | orchestrator | 2026-04-07 01:04:24 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 
01:04:24.725809 | orchestrator | 2026-04-07 01:04:24 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:04:24.726632 | orchestrator | 2026-04-07 01:04:24 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:04:24.726685 | orchestrator | 2026-04-07 01:04:24 | INFO  | Wait 1 second(s) until the next check [the same four tasks were polled in state STARTED roughly every 3 seconds from 01:04:27 to 01:06:11; repeated output trimmed] 2026-04-07 01:06:14.234050 | orchestrator | 2026-04-07 01:06:14 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:06:14.234999 | orchestrator | 2026-04-07 01:06:14 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:06:14.237059 | orchestrator | 2026-04-07 01:06:14 | INFO  | Task
62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:14.238645 | orchestrator | 2026-04-07 01:06:14 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:14.238678 | orchestrator | 2026-04-07 01:06:14 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:17.279162 | orchestrator | 2026-04-07 01:06:17 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:06:17.280936 | orchestrator | 2026-04-07 01:06:17 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:06:17.282471 | orchestrator | 2026-04-07 01:06:17 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:17.284158 | orchestrator | 2026-04-07 01:06:17 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:17.284203 | orchestrator | 2026-04-07 01:06:17 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:20.329870 | orchestrator | 2026-04-07 01:06:20 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state STARTED 2026-04-07 01:06:20.333080 | orchestrator | 2026-04-07 01:06:20 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:06:20.335356 | orchestrator | 2026-04-07 01:06:20 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:20.336297 | orchestrator | 2026-04-07 01:06:20 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:20.336923 | orchestrator | 2026-04-07 01:06:20 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:23.384784 | orchestrator | 2026-04-07 01:06:23 | INFO  | Task 8c8e1eca-9d8f-41b9-802f-12047effec16 is in state SUCCESS 2026-04-07 01:06:23.386295 | orchestrator | 2026-04-07 01:06:23.386371 | orchestrator | 2026-04-07 01:06:23.386384 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:06:23.386461 | 
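The STARTED/Wait polling pattern that fills this part of the log can be sketched as a small POSIX shell function. `task_state` is a stand-in for whatever state lookup the OSISM manager exposes (an assumption for illustration, not the real CLI):

```shell
# Sketch of the log's wait loop: query each task's state, report it, and
# sleep until every task reaches SUCCESS. task_state is a caller-supplied
# command (hypothetical); a FAILURE state would need extra handling.
poll_tasks() {
    interval=$1; shift
    pending="$*"
    while [ -n "$pending" ]; do
        still=""
        for id in $pending; do
            state=$(task_state "$id")
            echo "Task $id is in state $state"
            [ "$state" = "SUCCESS" ] || still="$still $id"
        done
        pending=$(echo $still)
        if [ -n "$pending" ]; then
            echo "Wait $interval second(s) until the next check"
            sleep "$interval"
        fi
    done
}
```

Note that although the log says "Wait 1 second(s)", consecutive poll timestamps are about 3 seconds apart, so the real loop evidently spends extra time in the state lookups themselves.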
orchestrator | 2026-04-07 01:06:23.386482 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:06:23.386489 | orchestrator | Tuesday 07 April 2026 01:03:37 +0000 (0:00:00.362) 0:00:00.362 ********* 2026-04-07 01:06:23.386495 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:06:23.386502 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:06:23.386509 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:06:23.386515 | orchestrator | 2026-04-07 01:06:23.386521 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:06:23.386526 | orchestrator | Tuesday 07 April 2026 01:03:37 +0000 (0:00:00.326) 0:00:00.689 ********* 2026-04-07 01:06:23.386532 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-07 01:06:23.386538 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-07 01:06:23.386544 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-07 01:06:23.386550 | orchestrator | 2026-04-07 01:06:23.386556 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-07 01:06:23.386561 | orchestrator | 2026-04-07 01:06:23.386568 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-07 01:06:23.386594 | orchestrator | Tuesday 07 April 2026 01:03:38 +0000 (0:00:00.359) 0:00:01.049 ********* 2026-04-07 01:06:23.386601 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:06:23.386609 | orchestrator | 2026-04-07 01:06:23.386615 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-04-07 01:06:23.386621 | orchestrator | Tuesday 07 April 2026 01:03:38 +0000 (0:00:00.738) 0:00:01.787 ********* 2026-04-07 01:06:23.386627 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 
2026-04-07 01:06:23.386633 | orchestrator |
2026-04-07 01:06:23.386639 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-07 01:06:23.386646 | orchestrator | Tuesday 07 April 2026 01:03:43 +0000 (0:00:04.755) 0:00:06.542 *********
2026-04-07 01:06:23.386652 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-07 01:06:23.386658 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-07 01:06:23.386664 | orchestrator |
2026-04-07 01:06:23.386670 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-07 01:06:23.386675 | orchestrator | Tuesday 07 April 2026 01:03:50 +0000 (0:00:06.416) 0:00:12.959 *********
2026-04-07 01:06:23.386681 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-04-07 01:06:23.386687 | orchestrator |
2026-04-07 01:06:23.386694 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-07 01:06:23.386700 | orchestrator | Tuesday 07 April 2026 01:03:53 +0000 (0:00:03.073) 0:00:16.033 *********
2026-04-07 01:06:23.386706 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-07 01:06:23.386713 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 01:06:23.386719 | orchestrator |
2026-04-07 01:06:23.386726 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-07 01:06:23.386732 | orchestrator | Tuesday 07 April 2026 01:03:56 +0000 (0:00:03.524) 0:00:19.557 *********
2026-04-07 01:06:23.386737 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-07 01:06:23.386743 | orchestrator |
2026-04-07 01:06:23.386749 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-04-07 01:06:23.386755 | orchestrator | Tuesday 07 April 2026 01:04:00 +0000 (0:00:03.328) 0:00:22.886 *********
2026-04-07 01:06:23.386761 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-04-07 01:06:23.386766 | orchestrator |
2026-04-07 01:06:23.386772 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-07 01:06:23.386778 | orchestrator | Tuesday 07 April 2026 01:04:04 +0000 (0:00:03.944) 0:00:26.830 *********
2026-04-07 01:06:23.386814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.386833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.386840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.386847 | orchestrator |
2026-04-07 01:06:23.386853 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-07 01:06:23.386859 | orchestrator | Tuesday 07 April 2026 01:04:10 +0000 (0:00:06.199) 0:00:33.029 *********
2026-04-07 01:06:23.386871 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:06:23.386877 | orchestrator |
2026-04-07 01:06:23.386883 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-07 01:06:23.386894 | orchestrator | Tuesday 07 April 2026 01:04:10 +0000 (0:00:00.547) 0:00:33.577 *********
2026-04-07 01:06:23.386900 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:06:23.386911 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:23.386917 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:06:23.386923 | orchestrator |
2026-04-07 01:06:23.386929 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-07 01:06:23.386935 | orchestrator | Tuesday 07 April 2026 01:04:14 +0000 (0:00:03.413) 0:00:36.991 *********
2026-04-07 01:06:23.386941 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 01:06:23.386949 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 01:06:23.386955 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 01:06:23.386961 | orchestrator |
2026-04-07 01:06:23.386969 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-07 01:06:23.386974 | orchestrator | Tuesday 07 April 2026 01:04:15 +0000 (0:00:01.744) 0:00:38.735 *********
2026-04-07 01:06:23.386979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 01:06:23.386983 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 01:06:23.386988 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 01:06:23.386992 | orchestrator |
2026-04-07 01:06:23.386996 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-07 01:06:23.387001 | orchestrator | Tuesday 07 April 2026 01:04:17 +0000 (0:00:01.345) 0:00:40.080 *********
2026-04-07 01:06:23.387006 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:06:23.387011 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:06:23.387015 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:06:23.387020 | orchestrator |
2026-04-07 01:06:23.387025 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-07 01:06:23.387029 | orchestrator | Tuesday 07 April 2026 01:04:17 +0000 (0:00:00.664) 0:00:40.745 *********
2026-04-07 01:06:23.387034 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387038 | orchestrator |
2026-04-07 01:06:23.387043 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-07 01:06:23.387048 | orchestrator | Tuesday 07 April 2026 01:04:18 +0000 (0:00:00.122) 0:00:40.868 *********
2026-04-07 01:06:23.387054 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387060 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:23.387066 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:23.387072 | orchestrator |
2026-04-07 01:06:23.387078 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-07 01:06:23.387084 | orchestrator | Tuesday 07 April 2026 01:04:18 +0000 (0:00:00.199) 0:00:41.068 *********
2026-04-07 01:06:23.387091 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:06:23.387097 | orchestrator |
2026-04-07 01:06:23.387102 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-07 01:06:23.387109 | orchestrator | Tuesday 07 April 2026 01:04:18 +0000 (0:00:00.481) 0:00:41.549 *********
2026-04-07 01:06:23.387117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387160 | orchestrator |
2026-04-07 01:06:23.387164 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-04-07 01:06:23.387169 | orchestrator | Tuesday 07 April 2026 01:04:22 +0000 (0:00:03.626) 0:00:45.176 *********
2026-04-07 01:06:23.387183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387188 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:23.387194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387203 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:23.387215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387221 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387225 | orchestrator |
2026-04-07 01:06:23.387230 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-04-07 01:06:23.387234 | orchestrator | Tuesday 07 April 2026 01:04:26 +0000 (0:00:03.710) 0:00:48.886 *********
2026-04-07 01:06:23.387239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387249 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387275 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:23.387460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387475 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:23.387479 | orchestrator |
2026-04-07 01:06:23.387483 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-04-07 01:06:23.387487 | orchestrator | Tuesday 07 April 2026 01:04:28 +0000 (0:00:02.902) 0:00:51.789 *********
2026-04-07 01:06:23.387491 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:23.387495 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:23.387509 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387513 | orchestrator |
2026-04-07 01:06:23.387517 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-04-07 01:06:23.387521 | orchestrator | Tuesday 07 April 2026 01:04:31 +0000 (0:00:02.886) 0:00:54.676 *********
2026-04-07 01:06:23.387525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 01:06:23.387553 | orchestrator |
2026-04-07 01:06:23.387557 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-07 01:06:23.387561 | orchestrator | Tuesday 07 April 2026 01:04:36 +0000 (0:00:04.215) 0:00:58.891 *********
2026-04-07 01:06:23.387565 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:06:23.387569 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:23.387572 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:06:23.387576 | orchestrator |
2026-04-07 01:06:23.387581 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-07 01:06:23.387584 | orchestrator | Tuesday 07 April 2026 01:04:42 +0000 (0:00:06.399) 0:01:05.291 *********
2026-04-07 01:06:23.387588 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387592 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:23.387596 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:23.387600 | orchestrator |
2026-04-07 01:06:23.387603 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-04-07 01:06:23.387607 | orchestrator | Tuesday 07 April 2026 01:04:46 +0000 (0:00:04.133) 0:01:09.425 *********
2026-04-07 01:06:23.387611 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387615 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:23.387619 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:23.387622 | orchestrator |
2026-04-07 01:06:23.387628 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-07 01:06:23.387634 | orchestrator | Tuesday 07 April 2026 01:04:50 +0000 (0:00:04.254) 0:01:13.679 *********
2026-04-07 01:06:23.387641 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:23.387646 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:23.387659 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387669 | orchestrator |
2026-04-07 01:06:23.387675 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-07 01:06:23.387685 | orchestrator | Tuesday 07 April 2026 01:04:54 +0000 (0:00:04.034) 0:01:17.714 *********
2026-04-07 01:06:23.387691 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387697 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:23.387702 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:23.387708 | orchestrator |
2026-04-07 01:06:23.387714 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-07 01:06:23.387720 | orchestrator | Tuesday 07 April 2026 01:04:58 +0000 (0:00:03.955) 0:01:21.669 *********
2026-04-07 01:06:23.387726 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:23.387732 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:23.387738 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:23.387744 | orchestrator |
2026-04-07 01:06:23.387757 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-07 01:06:23.387762 | orchestrator | Tuesday 07 April 2026 01:04:59 +0000 (0:00:00.374) 0:01:22.043 *********
2026-04-07 01:06:23.387768 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-07 01:06:23.387775 | orchestrator | skipping: [testbed-node-2]
2026-04-07
01:06:23.387781 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-07 01:06:23.387787 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:23.387793 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-07 01:06:23.387799 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:23.387805 | orchestrator | 2026-04-07 01:06:23.387811 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-07 01:06:23.387816 | orchestrator | Tuesday 07 April 2026 01:05:02 +0000 (0:00:03.425) 0:01:25.469 ********* 2026-04-07 01:06:23.387822 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:23.387829 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:23.387836 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:23.387840 | orchestrator | 2026-04-07 01:06:23.387844 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-07 01:06:23.387848 | orchestrator | Tuesday 07 April 2026 01:05:06 +0000 (0:00:04.205) 0:01:29.674 ********* 2026-04-07 01:06:23.387852 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:23.387855 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:23.387859 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:23.387863 | orchestrator | 2026-04-07 01:06:23.387867 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-04-07 01:06:23.387870 | orchestrator | Tuesday 07 April 2026 01:05:11 +0000 (0:00:05.114) 0:01:34.789 ********* 2026-04-07 01:06:23.387875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 01:06:23.387961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 01:06:23.387992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 01:06:23.388002 | orchestrator | 2026-04-07 01:06:23.388011 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-07 01:06:23.388017 | orchestrator | Tuesday 07 April 2026 01:05:18 +0000 (0:00:06.195) 0:01:40.984 ********* 2026-04-07 01:06:23.388022 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:23.388028 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:23.388034 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:23.388040 | orchestrator | 2026-04-07 01:06:23.388046 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-07 01:06:23.388052 | orchestrator | Tuesday 07 April 2026 01:05:18 +0000 (0:00:00.785) 0:01:41.770 ********* 2026-04-07 01:06:23.388058 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:23.388063 | orchestrator | 2026-04-07 01:06:23.388069 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-07 01:06:23.388076 | orchestrator | Tuesday 07 April 2026 01:05:20 +0000 (0:00:01.878) 0:01:43.649 
********* 2026-04-07 01:06:23.388081 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:23.388087 | orchestrator | 2026-04-07 01:06:23.388098 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-07 01:06:23.388104 | orchestrator | Tuesday 07 April 2026 01:05:22 +0000 (0:00:01.974) 0:01:45.623 ********* 2026-04-07 01:06:23.388110 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:23.388117 | orchestrator | 2026-04-07 01:06:23.388123 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-07 01:06:23.388129 | orchestrator | Tuesday 07 April 2026 01:05:24 +0000 (0:00:01.723) 0:01:47.347 ********* 2026-04-07 01:06:23.388136 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:23.388143 | orchestrator | 2026-04-07 01:06:23.388149 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-07 01:06:23.388156 | orchestrator | Tuesday 07 April 2026 01:05:48 +0000 (0:00:24.001) 0:02:11.348 ********* 2026-04-07 01:06:23.388164 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:23.388170 | orchestrator | 2026-04-07 01:06:23.388184 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-07 01:06:23.388196 | orchestrator | Tuesday 07 April 2026 01:05:50 +0000 (0:00:02.025) 0:02:13.374 ********* 2026-04-07 01:06:23.388202 | orchestrator | 2026-04-07 01:06:23.388210 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-07 01:06:23.388216 | orchestrator | Tuesday 07 April 2026 01:05:50 +0000 (0:00:00.192) 0:02:13.566 ********* 2026-04-07 01:06:23.388223 | orchestrator | 2026-04-07 01:06:23.388229 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-07 01:06:23.388235 | orchestrator | Tuesday 07 April 2026 01:05:50 +0000 (0:00:00.202) 0:02:13.769 
********* 2026-04-07 01:06:23.388242 | orchestrator | 2026-04-07 01:06:23.388250 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-07 01:06:23.388256 | orchestrator | Tuesday 07 April 2026 01:05:51 +0000 (0:00:00.177) 0:02:13.946 ********* 2026-04-07 01:06:23.388263 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:23.388270 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:06:23.388275 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:06:23.388279 | orchestrator | 2026-04-07 01:06:23.388284 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:06:23.388290 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-07 01:06:23.388297 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-07 01:06:23.388302 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-07 01:06:23.388306 | orchestrator | 2026-04-07 01:06:23.388311 | orchestrator | 2026-04-07 01:06:23.388315 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:06:23.388320 | orchestrator | Tuesday 07 April 2026 01:06:20 +0000 (0:00:29.019) 0:02:42.965 ********* 2026-04-07 01:06:23.388325 | orchestrator | =============================================================================== 2026-04-07 01:06:23.388329 | orchestrator | glance : Restart glance-api container ---------------------------------- 29.02s 2026-04-07 01:06:23.388334 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 24.00s 2026-04-07 01:06:23.388339 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.42s 2026-04-07 01:06:23.388344 | orchestrator | glance : Copying over glance-api.conf 
----------------------------------- 6.40s 2026-04-07 01:06:23.388349 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.20s 2026-04-07 01:06:23.388354 | orchestrator | glance : Check glance containers ---------------------------------------- 6.20s 2026-04-07 01:06:23.388358 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 5.12s 2026-04-07 01:06:23.388361 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.76s 2026-04-07 01:06:23.388370 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.25s 2026-04-07 01:06:23.388374 | orchestrator | glance : Copying over config.json files for services -------------------- 4.22s 2026-04-07 01:06:23.388378 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.21s 2026-04-07 01:06:23.388382 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.13s 2026-04-07 01:06:23.388386 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.03s 2026-04-07 01:06:23.388410 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.95s 2026-04-07 01:06:23.388414 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.94s 2026-04-07 01:06:23.388418 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.71s 2026-04-07 01:06:23.388422 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.63s 2026-04-07 01:06:23.388426 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.52s 2026-04-07 01:06:23.388430 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.43s 2026-04-07 01:06:23.388434 | orchestrator | glance : Ensuring glance service ceph config 
subdir exists -------------- 3.41s 2026-04-07 01:06:23.388438 | orchestrator | 2026-04-07 01:06:23 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:06:23.389207 | orchestrator | 2026-04-07 01:06:23 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:23.390937 | orchestrator | 2026-04-07 01:06:23 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:23.392682 | orchestrator | 2026-04-07 01:06:23 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:23.392731 | orchestrator | 2026-04-07 01:06:23 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:26.426274 | orchestrator | 2026-04-07 01:06:26 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:06:26.426778 | orchestrator | 2026-04-07 01:06:26 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:26.427796 | orchestrator | 2026-04-07 01:06:26 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:26.428649 | orchestrator | 2026-04-07 01:06:26 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:26.428687 | orchestrator | 2026-04-07 01:06:26 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:29.461672 | orchestrator | 2026-04-07 01:06:29 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:06:29.462154 | orchestrator | 2026-04-07 01:06:29 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:29.465292 | orchestrator | 2026-04-07 01:06:29 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:29.466491 | orchestrator | 2026-04-07 01:06:29 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:29.466718 | orchestrator | 2026-04-07 01:06:29 | INFO  | Wait 1 second(s) until the next 
check 2026-04-07 01:06:32.501926 | orchestrator | 2026-04-07 01:06:32 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:06:32.502516 | orchestrator | 2026-04-07 01:06:32 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:32.503490 | orchestrator | 2026-04-07 01:06:32 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:32.504131 | orchestrator | 2026-04-07 01:06:32 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:32.504179 | orchestrator | 2026-04-07 01:06:32 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:35.541484 | orchestrator | 2026-04-07 01:06:35 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state STARTED 2026-04-07 01:06:35.542045 | orchestrator | 2026-04-07 01:06:35 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:35.542709 | orchestrator | 2026-04-07 01:06:35 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:35.543502 | orchestrator | 2026-04-07 01:06:35 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:35.543523 | orchestrator | 2026-04-07 01:06:35 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:38.598974 | orchestrator | 2026-04-07 01:06:38 | INFO  | Task 7ae4a6a0-e9f1-482d-8ca8-5b0288fc812e is in state SUCCESS 2026-04-07 01:06:38.600757 | orchestrator | 2026-04-07 01:06:38.600799 | orchestrator | 2026-04-07 01:06:38.600806 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:06:38.600812 | orchestrator | 2026-04-07 01:06:38.600818 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:06:38.600823 | orchestrator | Tuesday 07 April 2026 01:03:29 +0000 (0:00:00.395) 0:00:00.395 ********* 2026-04-07 01:06:38.600829 | orchestrator | 
ok: [testbed-manager] 2026-04-07 01:06:38.600835 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:06:38.600840 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:06:38.600846 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:06:38.600851 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:06:38.600856 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:06:38.600861 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:06:38.600866 | orchestrator | 2026-04-07 01:06:38.600871 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:06:38.600877 | orchestrator | Tuesday 07 April 2026 01:03:30 +0000 (0:00:00.867) 0:00:01.262 ********* 2026-04-07 01:06:38.600882 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-07 01:06:38.600887 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-07 01:06:38.600893 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-07 01:06:38.600898 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-07 01:06:38.600903 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-07 01:06:38.600908 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-07 01:06:38.600913 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-07 01:06:38.600919 | orchestrator | 2026-04-07 01:06:38.600924 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-07 01:06:38.600929 | orchestrator | 2026-04-07 01:06:38.600934 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-07 01:06:38.600947 | orchestrator | Tuesday 07 April 2026 01:03:31 +0000 (0:00:00.979) 0:00:02.242 ********* 2026-04-07 01:06:38.600957 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:06:38.600963 | orchestrator | 2026-04-07 01:06:38.600968 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-07 01:06:38.600974 | orchestrator | Tuesday 07 April 2026 01:03:32 +0000 (0:00:01.288) 0:00:03.530 ********* 2026-04-07 01:06:38.600989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601014 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-07 01:06:38.601021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601111 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601142 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601356 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 01:06:38.601369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601471 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601522 | orchestrator | 2026-04-07 01:06:38.601528 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-07 01:06:38.601534 | orchestrator | Tuesday 07 April 2026 01:03:37 +0000 (0:00:04.475) 0:00:08.006 ********* 2026-04-07 01:06:38.601539 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:06:38.601548 | orchestrator | 2026-04-07 01:06:38.601554 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-07 01:06:38.601559 | orchestrator | Tuesday 07 April 2026 01:03:38 +0000 (0:00:01.470) 0:00:09.476 ********* 2026-04-07 01:06:38.601564 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601578 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-07 01:06:38.601584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601599 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 
01:06:38.601780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601801 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.601806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601839 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601858 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601900 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 01:06:38.601909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601915 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.601920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601926 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.601948 | orchestrator | 2026-04-07 01:06:38.601953 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-07 01:06:38.601958 | orchestrator | Tuesday 07 April 2026 01:03:44 +0000 (0:00:05.985) 0:00:15.461 ********* 2026-04-07 01:06:38.601968 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-07 01:06:38.601975 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 01:06:38.601981 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 01:06:38.601987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 01:06:38.601993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 01:06:38.602001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 01:06:38.602010 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-07 01:06:38.602046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602074 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602081 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:38.602086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602184 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:06:38.602188 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:38.602192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602302 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:38.602314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602325 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:06:38.602328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602340 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:06:38.602343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602430 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:06:38.602435 | orchestrator |
2026-04-07 01:06:38.602441 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-04-07 01:06:38.602446 | orchestrator | Tuesday 07 April 2026 01:03:46 +0000 (0:00:01.231) 0:00:16.693 *********
2026-04-07 01:06:38.602451 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-07 01:06:38.602457 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602470 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-07 01:06:38.602479 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602616 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:06:38.602621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.602749 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:38.602845 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:38.602851 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:38.602868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602886 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:06:38.602891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602914 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:06:38.602919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.602924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.602947 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:06:38.602952 | orchestrator |
2026-04-07 01:06:38.602980 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-04-07 01:06:38.602986 | orchestrator | Tuesday 07 April 2026 01:03:47 +0000 (0:00:01.760) 0:00:18.453 *********
2026-04-07 01:06:38.602992 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-07 01:06:38.602998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.603006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.603018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.603025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.603030 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.603049 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.603055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 01:06:38.603061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.603066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.603074 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.603083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.603088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.603094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.603112 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.603117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.603123 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-07 01:06:38.603135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 01:06:38.603140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 01:06:38.603146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07
01:06:38.603152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.603170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.603176 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.603181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.603187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.603197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.603203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.603208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.603214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.603219 | orchestrator | 2026-04-07 01:06:38.603225 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-07 01:06:38.603230 | orchestrator | Tuesday 07 April 2026 01:03:54 +0000 (0:00:06.594) 0:00:25.048 ********* 2026-04-07 01:06:38.603236 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 01:06:38.603241 | orchestrator | 2026-04-07 01:06:38.603247 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-07 01:06:38.603265 | orchestrator | Tuesday 07 April 2026 01:03:55 +0000 (0:00:00.871) 0:00:25.919 ********* 2026-04-07 01:06:38.603271 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312790, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 
1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603277 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312790, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603286 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312790, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603294 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312790, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603300 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312790, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.603305 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1312835, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7023618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603325 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312790, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603331 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1312835, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7023618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603337 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312790, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603346 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1312835, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7023618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603355 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1312835, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7023618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603360 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1312780, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6945622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603366 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1312835, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7023618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603383 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1312780, 'dev': 116, 'nlink': 1, 'atime': 
1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6945622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603389 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312813, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6985922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603422 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1312835, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7023618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603427 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312776, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.693533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603435 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312813, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6985922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603440 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1312780, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6945622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603446 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1312780, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6945622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603467 | orchestrator | 
skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1312780, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6945622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603473 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1312835, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7023618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.603482 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312792, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603487 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312776, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.693533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603495 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1312780, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6945622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603500 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312813, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6985922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603505 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312813, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 
'mtime': 1775520151.0, 'ctime': 1775521304.6985922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603510 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312813, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6985922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603527 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312792, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603541 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1312805, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6980731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603546 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312813, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6985922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603554 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312776, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.693533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603560 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312776, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.693533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603565 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312776, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.693533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603571 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312776, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.693533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603588 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1312805, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6980731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603596 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312794, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603600 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1312780, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6945622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603605 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312792, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603608 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312792, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603612 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312792, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603615 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312786, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.695533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603629 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312792, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603634 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1312805, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6980731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603637 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1312805, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6980731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603643 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312794, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603647 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312794, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603651 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1312805, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6980731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603655 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312825, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7005332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603672 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1312805, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6980731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603676 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312794, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603680 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312786, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.695533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603685 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312786, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.695533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603689 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312794, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603693 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312825, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7005332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603697 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312794, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603712 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312813, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6985922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603716 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312770, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6926548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603721 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312786, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.695533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603727 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312786, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.695533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603731 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312786, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.695533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603735 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312872, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7075334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603741 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312770, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6926548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603754 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312825, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7005332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603758 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312825, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7005332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603762 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312825, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7005332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603768 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312825, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7005332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603772 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312770, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6926548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603775 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312817, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6995332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603782 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312872, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7075334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603796 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312770, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6926548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603800 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312776, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.693533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603804 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312872, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7075334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603807 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312770, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6926548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603813 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312770, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6926548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603817 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312817, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6995332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603824 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312777, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6940777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603832 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312872, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7075334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603838 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312777, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6940777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603843 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312817, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6995332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603849 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312817, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6995332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603859 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312872, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7075334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603865 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312872, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7075334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603874 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1312774, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6930249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603884 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1312774, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6930249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603890 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312817, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6995332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603895 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312792, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603901 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312777, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6940777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603910 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312802, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603919 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312777, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6940777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603925 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312777, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6940777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603934 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312817, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6995332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603940 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312802, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603946 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312798, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6969042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603952 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1312774, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6930249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603960 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1312774, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6930249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603970 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1312774, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6930249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603976 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312798, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6969042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 01:06:38.603984 | orchestrator
| changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1312805, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6980731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.603990 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312777, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6940777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.603996 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312865, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7055333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604002 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:38.604008 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312802, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604016 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312802, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604026 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312802, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604031 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 
1312798, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6969042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604039 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312865, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7055333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604045 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:38.604050 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1312774, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6930249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604056 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312798, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 
1775521304.6969042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604061 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312798, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6969042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604073 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312865, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7055333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604079 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604084 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312794, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604090 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312802, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604098 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312865, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7055333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604103 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604109 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312865, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7055333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604114 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:38.604120 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312798, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6969042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604126 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312865, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7055333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 01:06:38.604137 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604143 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312786, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.695533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-04-07 01:06:38.604149 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312825, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7005332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604154 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312770, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6926548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604162 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312872, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7075334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604168 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312817, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6995332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604174 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312777, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6940777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604182 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1312774, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6930249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604191 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312802, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604196 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312798, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6969042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604202 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312865, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.7055333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:06:38.604208 | orchestrator | 2026-04-07 01:06:38.604265 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-07 01:06:38.604272 | orchestrator | Tuesday 07 April 2026 01:04:20 +0000 (0:00:25.279) 0:00:51.198 ********* 2026-04-07 01:06:38.604277 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 01:06:38.604283 | orchestrator | 
2026-04-07 01:06:38.604291 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-07 01:06:38.604296 | orchestrator | Tuesday 07 April 2026 01:04:21 +0000 (0:00:00.738) 0:00:51.937 ********* 2026-04-07 01:06:38.604301 | orchestrator | [WARNING]: Skipped 2026-04-07 01:06:38.604307 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604313 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-07 01:06:38.604319 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604324 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-07 01:06:38.604330 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 01:06:38.604335 | orchestrator | [WARNING]: Skipped 2026-04-07 01:06:38.604341 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604346 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-07 01:06:38.604352 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604361 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-07 01:06:38.604366 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-07 01:06:38.604371 | orchestrator | [WARNING]: Skipped 2026-04-07 01:06:38.604376 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604381 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-07 01:06:38.604386 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604403 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-07 01:06:38.604408 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 01:06:38.604413 | orchestrator | [WARNING]: 
Skipped 2026-04-07 01:06:38.604418 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604423 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-07 01:06:38.604428 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604434 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-07 01:06:38.604438 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-07 01:06:38.604443 | orchestrator | [WARNING]: Skipped 2026-04-07 01:06:38.604448 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604453 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-07 01:06:38.604458 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604463 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-07 01:06:38.604468 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-07 01:06:38.604473 | orchestrator | [WARNING]: Skipped 2026-04-07 01:06:38.604478 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604488 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-07 01:06:38.604494 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604499 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-07 01:06:38.604504 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-07 01:06:38.604510 | orchestrator | [WARNING]: Skipped 2026-04-07 01:06:38.604515 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604520 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-07 01:06:38.604524 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-07 01:06:38.604527 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-07 01:06:38.604531 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-07 01:06:38.604534 | orchestrator | 2026-04-07 01:06:38.604537 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-07 01:06:38.604540 | orchestrator | Tuesday 07 April 2026 01:04:23 +0000 (0:00:01.768) 0:00:53.705 ********* 2026-04-07 01:06:38.604543 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-07 01:06:38.604547 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-07 01:06:38.604550 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:38.604553 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:38.604556 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-07 01:06:38.604559 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604562 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-07 01:06:38.604566 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:38.604569 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-07 01:06:38.604572 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604578 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-07 01:06:38.604581 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604584 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-07 01:06:38.604587 | orchestrator | 2026-04-07 01:06:38.604591 | orchestrator | TASK 
[prometheus : Copying over prometheus web config file] ******************** 2026-04-07 01:06:38.604594 | orchestrator | Tuesday 07 April 2026 01:04:38 +0000 (0:00:15.946) 0:01:09.653 ********* 2026-04-07 01:06:38.604597 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-07 01:06:38.604603 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:38.604606 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-07 01:06:38.604609 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:38.604612 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-07 01:06:38.604615 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:38.604618 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-07 01:06:38.604621 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604624 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-07 01:06:38.604627 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604630 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-07 01:06:38.604633 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604636 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-07 01:06:38.604640 | orchestrator | 2026-04-07 01:06:38.604643 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-07 01:06:38.604646 | orchestrator | Tuesday 07 April 2026 01:04:42 +0000 (0:00:03.618) 0:01:13.271 ********* 2026-04-07 01:06:38.604649 | orchestrator | skipping: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-07 01:06:38.604653 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:38.604656 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-07 01:06:38.604659 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:38.604662 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-07 01:06:38.604665 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-07 01:06:38.604669 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:38.604672 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-07 01:06:38.604675 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604678 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-07 01:06:38.604681 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604686 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-07 01:06:38.604689 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604692 | orchestrator | 2026-04-07 01:06:38.604695 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-07 01:06:38.604698 | orchestrator | Tuesday 07 April 2026 01:04:44 +0000 (0:00:01.916) 0:01:15.188 ********* 2026-04-07 01:06:38.604701 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 01:06:38.604707 | orchestrator | 2026-04-07 
01:06:38.604710 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-07 01:06:38.604713 | orchestrator | Tuesday 07 April 2026 01:04:45 +0000 (0:00:00.795) 0:01:15.983 ********* 2026-04-07 01:06:38.604716 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:06:38.604719 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:38.604722 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:38.604725 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:38.604728 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604731 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604734 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604738 | orchestrator | 2026-04-07 01:06:38.604741 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-07 01:06:38.604744 | orchestrator | Tuesday 07 April 2026 01:04:46 +0000 (0:00:00.860) 0:01:16.844 ********* 2026-04-07 01:06:38.604747 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:06:38.604750 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604753 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604756 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:38.604759 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604762 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:06:38.604765 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:06:38.604768 | orchestrator | 2026-04-07 01:06:38.604771 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-07 01:06:38.604774 | orchestrator | Tuesday 07 April 2026 01:04:48 +0000 (0:00:02.354) 0:01:19.198 ********* 2026-04-07 01:06:38.604778 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-07 01:06:38.604781 | orchestrator | skipping: 
[testbed-manager] 2026-04-07 01:06:38.604784 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-07 01:06:38.604787 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-07 01:06:38.604790 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:38.604793 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:38.604796 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-07 01:06:38.604799 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:38.604804 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-07 01:06:38.604807 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604810 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-07 01:06:38.604813 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604817 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-07 01:06:38.604820 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604823 | orchestrator | 2026-04-07 01:06:38.604826 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-07 01:06:38.604829 | orchestrator | Tuesday 07 April 2026 01:04:50 +0000 (0:00:01.992) 0:01:21.191 ********* 2026-04-07 01:06:38.604832 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-07 01:06:38.604835 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:38.604838 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-07 01:06:38.604841 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:38.604844 
| orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-07 01:06:38.604847 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:38.604851 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-07 01:06:38.604856 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-07 01:06:38.604859 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604862 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-07 01:06:38.604865 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604868 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-07 01:06:38.604871 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604874 | orchestrator | 2026-04-07 01:06:38.604883 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-07 01:06:38.604886 | orchestrator | Tuesday 07 April 2026 01:04:52 +0000 (0:00:01.689) 0:01:22.881 ********* 2026-04-07 01:06:38.604893 | orchestrator | [WARNING]: Skipped 2026-04-07 01:06:38.604896 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-07 01:06:38.604899 | orchestrator | due to this access issue: 2026-04-07 01:06:38.604902 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-07 01:06:38.604905 | orchestrator | not a directory 2026-04-07 01:06:38.604909 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 01:06:38.604912 | orchestrator | 2026-04-07 01:06:38.604918 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 
2026-04-07 01:06:38.604922 | orchestrator | Tuesday 07 April 2026 01:04:53 +0000 (0:00:01.337) 0:01:24.218 ********* 2026-04-07 01:06:38.604926 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:06:38.604929 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:38.604933 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:38.604937 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:38.604940 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604944 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604948 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604951 | orchestrator | 2026-04-07 01:06:38.604955 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-07 01:06:38.604959 | orchestrator | Tuesday 07 April 2026 01:04:54 +0000 (0:00:00.845) 0:01:25.064 ********* 2026-04-07 01:06:38.604962 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:06:38.604966 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:38.604969 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:38.604973 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:38.604977 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:06:38.604980 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:06:38.604984 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:06:38.604989 | orchestrator | 2026-04-07 01:06:38.604995 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-07 01:06:38.605000 | orchestrator | Tuesday 07 April 2026 01:04:55 +0000 (0:00:01.074) 0:01:26.138 ********* 2026-04-07 01:06:38.605006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.605016 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-07 01:06:38.605026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.605033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.605039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605047 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.605052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605056 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.605060 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.605065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605079 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.605085 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.605093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 01:06:38.605099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.605110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.605123 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 01:06:38.605129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-04-07 01:06:38.605150 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.605156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.605165 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.605179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.605185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 01:06:38.605190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 01:06:38.605203 | orchestrator | 2026-04-07 01:06:38.605209 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-07 01:06:38.605214 | orchestrator | Tuesday 07 April 2026 01:05:00 +0000 (0:00:04.822) 0:01:30.961 ********* 2026-04-07 01:06:38.605219 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-07 01:06:38.605225 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:06:38.605230 | orchestrator | 2026-04-07 01:06:38.605235 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 01:06:38.605241 | orchestrator | Tuesday 07 April 2026 01:05:01 +0000 (0:00:01.135) 0:01:32.096 ********* 2026-04-07 01:06:38.605246 | orchestrator | 2026-04-07 01:06:38.605251 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 01:06:38.605257 | orchestrator | Tuesday 07 April 2026 01:05:01 +0000 (0:00:00.068) 0:01:32.164 ********* 2026-04-07 01:06:38.605267 | orchestrator | 2026-04-07 01:06:38.605272 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 01:06:38.605278 | orchestrator | Tuesday 07 April 2026 01:05:01 +0000 (0:00:00.064) 0:01:32.229 ********* 2026-04-07 01:06:38.605283 | orchestrator | 2026-04-07 01:06:38.605289 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 01:06:38.605294 | orchestrator | Tuesday 07 April 2026 01:05:01 +0000 (0:00:00.065) 
0:01:32.294 ********* 2026-04-07 01:06:38.605300 | orchestrator | 2026-04-07 01:06:38.605305 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 01:06:38.605310 | orchestrator | Tuesday 07 April 2026 01:05:01 +0000 (0:00:00.063) 0:01:32.358 ********* 2026-04-07 01:06:38.605316 | orchestrator | 2026-04-07 01:06:38.605321 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 01:06:38.605326 | orchestrator | Tuesday 07 April 2026 01:05:01 +0000 (0:00:00.060) 0:01:32.419 ********* 2026-04-07 01:06:38.605331 | orchestrator | 2026-04-07 01:06:38.605336 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 01:06:38.605342 | orchestrator | Tuesday 07 April 2026 01:05:01 +0000 (0:00:00.060) 0:01:32.479 ********* 2026-04-07 01:06:38.605347 | orchestrator | 2026-04-07 01:06:38.605352 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-07 01:06:38.605357 | orchestrator | Tuesday 07 April 2026 01:05:01 +0000 (0:00:00.081) 0:01:32.561 ********* 2026-04-07 01:06:38.605362 | orchestrator | changed: [testbed-manager] 2026-04-07 01:06:38.605368 | orchestrator | 2026-04-07 01:06:38.605373 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-07 01:06:38.605380 | orchestrator | Tuesday 07 April 2026 01:05:15 +0000 (0:00:13.779) 0:01:46.340 ********* 2026-04-07 01:06:38.605386 | orchestrator | changed: [testbed-manager] 2026-04-07 01:06:38.605400 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:06:38.605406 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:38.605411 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:06:38.605416 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:06:38.605421 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:06:38.605427 | orchestrator | changed: 
[testbed-node-2] 2026-04-07 01:06:38.605432 | orchestrator | 2026-04-07 01:06:38.605437 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-07 01:06:38.605442 | orchestrator | Tuesday 07 April 2026 01:05:28 +0000 (0:00:13.095) 0:01:59.435 ********* 2026-04-07 01:06:38.605448 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:06:38.605452 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:06:38.605457 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:38.605462 | orchestrator | 2026-04-07 01:06:38.605467 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-07 01:06:38.605472 | orchestrator | Tuesday 07 April 2026 01:05:38 +0000 (0:00:09.576) 0:02:09.012 ********* 2026-04-07 01:06:38.605477 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:06:38.605482 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:06:38.605487 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:38.605492 | orchestrator | 2026-04-07 01:06:38.605497 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-07 01:06:38.605503 | orchestrator | Tuesday 07 April 2026 01:05:47 +0000 (0:00:09.645) 0:02:18.658 ********* 2026-04-07 01:06:38.605508 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:06:38.605513 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:06:38.605518 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:38.605523 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:06:38.605529 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:06:38.605534 | orchestrator | changed: [testbed-manager] 2026-04-07 01:06:38.605539 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:06:38.605544 | orchestrator | 2026-04-07 01:06:38.605549 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-07 01:06:38.605554 | orchestrator 
| Tuesday 07 April 2026 01:06:02 +0000 (0:00:14.748) 0:02:33.407 ********* 2026-04-07 01:06:38.605566 | orchestrator | changed: [testbed-manager] 2026-04-07 01:06:38.605571 | orchestrator | 2026-04-07 01:06:38.605577 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-07 01:06:38.605582 | orchestrator | Tuesday 07 April 2026 01:06:13 +0000 (0:00:11.224) 0:02:44.631 ********* 2026-04-07 01:06:38.605587 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:06:38.605592 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:06:38.605597 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:06:38.605603 | orchestrator | 2026-04-07 01:06:38.605608 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-07 01:06:38.605613 | orchestrator | Tuesday 07 April 2026 01:06:24 +0000 (0:00:10.301) 0:02:54.932 ********* 2026-04-07 01:06:38.605618 | orchestrator | changed: [testbed-manager] 2026-04-07 01:06:38.605623 | orchestrator | 2026-04-07 01:06:38.605629 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-07 01:06:38.605634 | orchestrator | Tuesday 07 April 2026 01:06:29 +0000 (0:00:05.536) 0:03:00.469 ********* 2026-04-07 01:06:38.605642 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:06:38.605648 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:06:38.605653 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:06:38.605658 | orchestrator | 2026-04-07 01:06:38.605663 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:06:38.605669 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-07 01:06:38.605675 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-07 01:06:38.605680 | orchestrator | testbed-node-1 
: ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-07 01:06:38.605685 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-07 01:06:38.605691 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 01:06:38.605696 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 01:06:38.605701 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 01:06:38.605706 | orchestrator | 2026-04-07 01:06:38.605712 | orchestrator | 2026-04-07 01:06:38.605717 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:06:38.605774 | orchestrator | Tuesday 07 April 2026 01:06:36 +0000 (0:00:06.624) 0:03:07.093 ********* 2026-04-07 01:06:38.605780 | orchestrator | =============================================================================== 2026-04-07 01:06:38.605786 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.28s 2026-04-07 01:06:38.605791 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.95s 2026-04-07 01:06:38.605797 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.75s 2026-04-07 01:06:38.605802 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.78s 2026-04-07 01:06:38.605807 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.09s 2026-04-07 01:06:38.605817 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.22s 2026-04-07 01:06:38.605823 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.30s 2026-04-07 01:06:38.605828 | orchestrator | prometheus : Restart 
prometheus-memcached-exporter container ------------ 9.65s 2026-04-07 01:06:38.605837 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.58s 2026-04-07 01:06:38.605843 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.62s 2026-04-07 01:06:38.605849 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.59s 2026-04-07 01:06:38.605854 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.99s 2026-04-07 01:06:38.605860 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.54s 2026-04-07 01:06:38.605864 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.82s 2026-04-07 01:06:38.605870 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.48s 2026-04-07 01:06:38.605874 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.62s 2026-04-07 01:06:38.605879 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.35s 2026-04-07 01:06:38.605884 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.99s 2026-04-07 01:06:38.605889 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.92s 2026-04-07 01:06:38.605894 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.77s 2026-04-07 01:06:38.605899 | orchestrator | 2026-04-07 01:06:38 | INFO  | Task 69087c90-88ad-4832-afcc-98f13573e807 is in state STARTED 2026-04-07 01:06:38.605905 | orchestrator | 2026-04-07 01:06:38 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:38.605910 | orchestrator | 2026-04-07 01:06:38 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:38.607137 | orchestrator | 
2026-04-07 01:06:38 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:38.607178 | orchestrator | 2026-04-07 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:41.650172 | orchestrator | 2026-04-07 01:06:41 | INFO  | Task 69087c90-88ad-4832-afcc-98f13573e807 is in state STARTED 2026-04-07 01:06:41.652351 | orchestrator | 2026-04-07 01:06:41 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:41.654559 | orchestrator | 2026-04-07 01:06:41 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:41.658251 | orchestrator | 2026-04-07 01:06:41 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:41.658328 | orchestrator | 2026-04-07 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:44.694681 | orchestrator | 2026-04-07 01:06:44 | INFO  | Task 69087c90-88ad-4832-afcc-98f13573e807 is in state STARTED 2026-04-07 01:06:44.695893 | orchestrator | 2026-04-07 01:06:44 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:44.697490 | orchestrator | 2026-04-07 01:06:44 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:44.698939 | orchestrator | 2026-04-07 01:06:44 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:44.699110 | orchestrator | 2026-04-07 01:06:44 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:47.738178 | orchestrator | 2026-04-07 01:06:47 | INFO  | Task 69087c90-88ad-4832-afcc-98f13573e807 is in state STARTED 2026-04-07 01:06:47.739597 | orchestrator | 2026-04-07 01:06:47 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state STARTED 2026-04-07 01:06:47.741106 | orchestrator | 2026-04-07 01:06:47 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:47.742753 | orchestrator | 2026-04-07 01:06:47 | INFO  | 
Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:47.742782 | orchestrator | 2026-04-07 01:06:47 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:50.787662 | orchestrator | 2026-04-07 01:06:50 | INFO  | Task 69087c90-88ad-4832-afcc-98f13573e807 is in state STARTED 2026-04-07 01:06:50.791503 | orchestrator | 2026-04-07 01:06:50 | INFO  | Task 62c96791-8802-4071-95be-87d7d6ba5a0c is in state SUCCESS 2026-04-07 01:06:50.791791 | orchestrator | 2026-04-07 01:06:50.793541 | orchestrator | 2026-04-07 01:06:50.793574 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:06:50.793582 | orchestrator | 2026-04-07 01:06:50.793590 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:06:50.793597 | orchestrator | Tuesday 07 April 2026 01:04:02 +0000 (0:00:00.516) 0:00:00.516 ********* 2026-04-07 01:06:50.793603 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:06:50.793610 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:06:50.793617 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:06:50.793624 | orchestrator | 2026-04-07 01:06:50.793630 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:06:50.793636 | orchestrator | Tuesday 07 April 2026 01:04:02 +0000 (0:00:00.420) 0:00:00.937 ********* 2026-04-07 01:06:50.793642 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-07 01:06:50.793648 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-07 01:06:50.793654 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-07 01:06:50.793660 | orchestrator | 2026-04-07 01:06:50.793665 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-07 01:06:50.793671 | orchestrator | 2026-04-07 01:06:50.793676 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-04-07 01:06:50.793682 | orchestrator | Tuesday 07 April 2026 01:04:02 +0000 (0:00:00.477) 0:00:01.414 ********* 2026-04-07 01:06:50.793687 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:06:50.793694 | orchestrator | 2026-04-07 01:06:50.793699 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-07 01:06:50.793705 | orchestrator | Tuesday 07 April 2026 01:04:04 +0000 (0:00:01.781) 0:00:03.196 ********* 2026-04-07 01:06:50.793711 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-07 01:06:50.793717 | orchestrator | 2026-04-07 01:06:50.793724 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-07 01:06:50.793730 | orchestrator | Tuesday 07 April 2026 01:04:08 +0000 (0:00:03.747) 0:00:06.944 ********* 2026-04-07 01:06:50.793737 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-07 01:06:50.793744 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-07 01:06:50.793750 | orchestrator | 2026-04-07 01:06:50.793757 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-07 01:06:50.793763 | orchestrator | Tuesday 07 April 2026 01:04:14 +0000 (0:00:05.643) 0:00:12.587 ********* 2026-04-07 01:06:50.793769 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 01:06:50.793776 | orchestrator | 2026-04-07 01:06:50.793783 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-07 01:06:50.793789 | orchestrator | Tuesday 07 April 2026 01:04:16 +0000 (0:00:02.825) 0:00:15.413 ********* 2026-04-07 01:06:50.793804 | orchestrator | 
changed: [testbed-node-0] => (item=cinder -> service) 2026-04-07 01:06:50.793811 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 01:06:50.793824 | orchestrator | 2026-04-07 01:06:50.793830 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-07 01:06:50.793837 | orchestrator | Tuesday 07 April 2026 01:04:20 +0000 (0:00:03.361) 0:00:18.774 ********* 2026-04-07 01:06:50.793843 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 01:06:50.793849 | orchestrator | 2026-04-07 01:06:50.793856 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-07 01:06:50.794152 | orchestrator | Tuesday 07 April 2026 01:04:23 +0000 (0:00:02.820) 0:00:21.594 ********* 2026-04-07 01:06:50.794160 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-07 01:06:50.794164 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-07 01:06:50.794168 | orchestrator | 2026-04-07 01:06:50.794172 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-07 01:06:50.794176 | orchestrator | Tuesday 07 April 2026 01:04:29 +0000 (0:00:06.695) 0:00:28.291 ********* 2026-04-07 01:06:50.794182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 01:06:50.794195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 01:06:50.794204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 01:06:50.794211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794269 | orchestrator | 2026-04-07 01:06:50.794273 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-07 01:06:50.794277 | orchestrator | Tuesday 07 April 2026 01:04:32 +0000 (0:00:03.065) 0:00:31.356 ********* 2026-04-07 01:06:50.794281 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:50.794285 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:50.794289 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:50.794293 | orchestrator | 2026-04-07 01:06:50.794296 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-07 01:06:50.794300 | orchestrator | Tuesday 07 April 2026 01:04:33 +0000 (0:00:00.338) 0:00:31.695 ********* 2026-04-07 01:06:50.794304 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:06:50.794308 | orchestrator | 2026-04-07 01:06:50.794312 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-07 01:06:50.794318 | orchestrator | Tuesday 07 April 2026 01:04:33 +0000 (0:00:00.521) 0:00:32.217 ********* 2026-04-07 01:06:50.794322 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-07 01:06:50.794326 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-07 01:06:50.794330 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-07 
01:06:50.794334 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-07 01:06:50.794337 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-07 01:06:50.794341 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-07 01:06:50.794345 | orchestrator | 2026-04-07 01:06:50.794349 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-07 01:06:50.794353 | orchestrator | Tuesday 07 April 2026 01:04:35 +0000 (0:00:02.160) 0:00:34.377 ********* 2026-04-07 01:06:50.794357 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 01:06:50.794365 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 01:06:50.794372 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 01:06:50.794376 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 01:06:50.794383 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 01:06:50.794387 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 01:06:50.794427 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 01:06:50.794435 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 01:06:50.794439 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 
01:06:50.794447 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 01:06:50.794452 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 01:06:50.794469 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 01:06:50.794474 | orchestrator | 2026-04-07 01:06:50.794478 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-07 01:06:50.794482 | orchestrator | Tuesday 07 April 2026 01:04:39 +0000 (0:00:03.825) 0:00:38.202 ********* 2026-04-07 01:06:50.794486 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-07 01:06:50.794518 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-07 01:06:50.794522 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-07 01:06:50.794526 | orchestrator | 2026-04-07 01:06:50.794530 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-07 01:06:50.794539 | orchestrator | Tuesday 07 April 2026 01:04:42 +0000 (0:00:02.303) 0:00:40.506 ********* 2026-04-07 01:06:50.794543 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-07 01:06:50.794553 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-07 01:06:50.794557 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-07 01:06:50.794561 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 01:06:50.794565 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 01:06:50.794568 | orchestrator | changed: [testbed-node-1] => 
(item=ceph.client.cinder-backup.keyring) 2026-04-07 01:06:50.794572 | orchestrator | 2026-04-07 01:06:50.794576 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-07 01:06:50.794579 | orchestrator | Tuesday 07 April 2026 01:04:45 +0000 (0:00:03.157) 0:00:43.664 ********* 2026-04-07 01:06:50.794583 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-07 01:06:50.794587 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-07 01:06:50.794591 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-07 01:06:50.794595 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-07 01:06:50.794598 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-07 01:06:50.794602 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-07 01:06:50.794606 | orchestrator | 2026-04-07 01:06:50.794609 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-07 01:06:50.794613 | orchestrator | Tuesday 07 April 2026 01:04:46 +0000 (0:00:01.112) 0:00:44.776 ********* 2026-04-07 01:06:50.794617 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:50.794621 | orchestrator | 2026-04-07 01:06:50.794624 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-07 01:06:50.794628 | orchestrator | Tuesday 07 April 2026 01:04:46 +0000 (0:00:00.167) 0:00:44.944 ********* 2026-04-07 01:06:50.794632 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:50.794639 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:06:50.794645 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:50.794652 | orchestrator | 2026-04-07 01:06:50.794658 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-07 01:06:50.794664 | orchestrator | Tuesday 07 April 2026 01:04:47 +0000 (0:00:00.699) 0:00:45.643 
********* 2026-04-07 01:06:50.794675 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:06:50.794682 | orchestrator | 2026-04-07 01:06:50.794782 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-07 01:06:50.794787 | orchestrator | Tuesday 07 April 2026 01:04:47 +0000 (0:00:00.680) 0:00:46.324 ********* 2026-04-07 01:06:50.794791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 01:06:50.794796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 01:06:50.794800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 01:06:50.794804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.794857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.794861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.794871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.794875 | orchestrator |
2026-04-07 01:06:50.794879 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-04-07 01:06:50.794883 | orchestrator | Tuesday 07 April 2026 01:04:52 +0000 (0:00:04.785) 0:00:51.109 *********
2026-04-07 01:06:50.794887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name':
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 01:06:50.794891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794907 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:50.794914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 01:06:50.794918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794932 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:50.794936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 01:06:50.794945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.794953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.794957 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:50.794961 | orchestrator |
2026-04-07 01:06:50.794965 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-04-07 01:06:50.794969 | orchestrator | Tuesday 07 April 2026 01:04:53 +0000 (0:00:00.998) 0:00:52.107 *********
2026-04-07 01:06:50.794975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 01:06:50.794979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.794998 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:06:50.795002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 01:06:50.795006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.795012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.795018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.795022 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:06:50.795028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 01:06:50.795033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:06:50.795037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795045 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:50.795049 | orchestrator |
2026-04-07 01:06:50.795052 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-04-07 01:06:50.795059 | orchestrator | Tuesday 07 April 2026 01:04:54 +0000 (0:00:00.984) 0:00:53.092 *********
2026-04-07 01:06:50.795065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 01:06:50.795071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 01:06:50.795076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 01:06:50.795080 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.795084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.795093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:06:50.795097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795129 | orchestrator |
2026-04-07 01:06:50.795133 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-04-07 01:06:50.795137 | orchestrator | Tuesday 07 April 2026 01:04:59 +0000 (0:00:04.979) 0:00:58.072 *********
2026-04-07 01:06:50.795141 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-07 01:06:50.795144 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-07 01:06:50.795148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-07 01:06:50.795152 | orchestrator |
2026-04-07 01:06:50.795156 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-04-07 01:06:50.795160 | orchestrator | Tuesday 07 April 2026 01:05:01 +0000 (0:00:02.358) 0:01:00.430 *********
2026-04-07 01:06:50.795166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-07 01:06:50.795171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-07 01:06:50.795175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-07 01:06:50.795183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795226 | orchestrator |
2026-04-07 01:06:50.795232 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-04-07 01:06:50.795236 | orchestrator | Tuesday 07 April 2026 01:05:17 +0000 (0:00:15.575) 0:01:16.006 *********
2026-04-07 01:06:50.795240 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:50.795243 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:06:50.795247 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:06:50.795251 | orchestrator |
2026-04-07 01:06:50.795255 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] *********************
2026-04-07 01:06:50.795259 | orchestrator | Tuesday 07 April 2026 01:05:20 +0000 (0:00:03.105) 0:01:19.112 *********
2026-04-07 01:06:50.795262 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:50.795266 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:06:50.795270 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:06:50.795274 | orchestrator |
2026-04-07 01:06:50.795278 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-04-07 01:06:50.795281 | orchestrator | Tuesday 07 April 2026 01:05:22 +0000 (0:00:01.856) 0:01:20.968 *********
2026-04-07 01:06:50.795285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-07 01:06:50.795292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795306 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:50.795313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-07 01:06:50.795317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795334 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:50.795338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-07 01:06:50.795342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795359 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:50.795363 | orchestrator |
2026-04-07 01:06:50.795367 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-04-07 01:06:50.795370 | orchestrator | Tuesday 07 April 2026 01:05:23 +0000 (0:00:00.774) 0:01:21.742 *********
2026-04-07 01:06:50.795374 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:50.795378 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:50.795382 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:50.795386 | orchestrator |
2026-04-07 01:06:50.795389 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-04-07 01:06:50.795425 | orchestrator | Tuesday 07 April 2026 01:05:23 +0000 (0:00:00.308) 0:01:22.050 *********
2026-04-07 01:06:50.795434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-07 01:06:50.795439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-07 01:06:50.795446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-07 01:06:50.795455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-07 01:06:50.795506 | orchestrator |
2026-04-07 01:06:50.795511 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-07 01:06:50.795515 | orchestrator | Tuesday 07 April 2026 01:05:26 +0000 (0:00:02.634) 0:01:24.685 *********
2026-04-07 01:06:50.795523 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:50.795528 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:06:50.795532 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:06:50.795537 | orchestrator |
2026-04-07 01:06:50.795541 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-04-07 01:06:50.795546 | orchestrator | Tuesday 07 April 2026 01:05:26 +0000 (0:00:00.277) 0:01:24.962 *********
2026-04-07 01:06:50.795551 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:50.795555 | orchestrator |
2026-04-07 01:06:50.795560 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-04-07 01:06:50.795565 | orchestrator | Tuesday 07 April 2026 01:05:28 +0000 (0:00:01.851) 0:01:26.814 *********
2026-04-07 01:06:50.795569 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:50.795574 | orchestrator |
2026-04-07 01:06:50.795579 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-04-07 01:06:50.795583 | orchestrator | Tuesday 07 April 2026 01:05:30 +0000 (0:00:01.819) 0:01:28.634 *********
2026-04-07 01:06:50.795587 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:50.795592 | orchestrator |
2026-04-07 01:06:50.795596 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-07 01:06:50.795600 | orchestrator | Tuesday 07 April 2026 01:05:47 +0000 (0:00:17.352) 0:01:45.987 *********
2026-04-07 01:06:50.795604 | orchestrator |
2026-04-07 01:06:50.795611 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-07 01:06:50.795615 | orchestrator | Tuesday 07 April 2026 01:05:47 +0000 (0:00:00.076) 0:01:46.063 *********
2026-04-07 01:06:50.795619 | orchestrator |
2026-04-07 01:06:50.795622 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-07 01:06:50.795626 | orchestrator | Tuesday 07 April 2026 01:05:47 +0000 (0:00:00.072) 0:01:46.136 *********
2026-04-07 01:06:50.795630 | orchestrator |
2026-04-07 01:06:50.795636 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-04-07 01:06:50.795642 | orchestrator | Tuesday 07 April 2026 01:05:47 +0000 (0:00:00.062) 0:01:46.198 *********
2026-04-07 01:06:50.795649 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:50.795655 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:06:50.795661 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:06:50.795667 | orchestrator |
2026-04-07 01:06:50.795674 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-04-07 01:06:50.795683 | orchestrator | Tuesday 07 April 2026 01:06:17 +0000 (0:00:29.814) 0:02:16.013 *********
2026-04-07 01:06:50.795690 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:06:50.795696 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:50.795702 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:06:50.795709 | orchestrator |
2026-04-07 01:06:50.795715 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-04-07 01:06:50.795719 | orchestrator | Tuesday 07 April 2026 01:06:27 +0000 (0:00:09.652) 0:02:25.665 *********
2026-04-07 01:06:50.795723 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:50.795727 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:06:50.795731 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:06:50.795734 | orchestrator |
2026-04-07 01:06:50.795738 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-04-07 01:06:50.795742 | orchestrator | Tuesday 07 April 2026 01:06:44 +0000 (0:00:17.719) 0:02:43.384 *********
2026-04-07 01:06:50.795746 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:06:50.795750 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:06:50.795753 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:06:50.795757 | orchestrator |
2026-04-07 01:06:50.795761 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-04-07 01:06:50.795764 | orchestrator | Tuesday 07 April 2026 01:06:50 +0000 (0:00:05.356) 0:02:48.741 *********
2026-04-07 01:06:50.795768 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:06:50.795772 | orchestrator |
2026-04-07 01:06:50.795776 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:06:50.795780 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-07 01:06:50.795784 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 01:06:50.795788 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 01:06:50.795792 | orchestrator |
2026-04-07 01:06:50.795796 | orchestrator |
2026-04-07 01:06:50.795800 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:06:50.795804 | orchestrator | Tuesday 07 April 2026 01:06:50 +0000 (0:00:00.227) 0:02:48.969 *********
2026-04-07 01:06:50.795807 | orchestrator | ===============================================================================
2026-04-07 01:06:50.795811 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 29.81s
2026-04-07 01:06:50.795815 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 17.72s
2026-04-07 01:06:50.795819 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.35s
2026-04-07 01:06:50.795822 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.58s
2026-04-07 01:06:50.795831 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.65s
2026-04-07 01:06:50.795835 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.70s
2026-04-07 01:06:50.795838 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.64s
2026-04-07 01:06:50.795842 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.36s
2026-04-07 01:06:50.795846 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.98s
2026-04-07 01:06:50.795849 |
orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.78s 2026-04-07 01:06:50.795856 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.83s 2026-04-07 01:06:50.795860 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.75s 2026-04-07 01:06:50.795864 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.36s 2026-04-07 01:06:50.795868 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.16s 2026-04-07 01:06:50.795872 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.11s 2026-04-07 01:06:50.795875 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.06s 2026-04-07 01:06:50.795879 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.83s 2026-04-07 01:06:50.795883 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.82s 2026-04-07 01:06:50.795887 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.63s 2026-04-07 01:06:50.795890 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.36s 2026-04-07 01:06:50.795894 | orchestrator | 2026-04-07 01:06:50 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:50.796366 | orchestrator | 2026-04-07 01:06:50 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:50.796849 | orchestrator | 2026-04-07 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:06:53.843313 | orchestrator | 2026-04-07 01:06:53 | INFO  | Task 8085d2e9-08fe-4eac-bbbb-7073836c6f75 is in state STARTED 2026-04-07 01:06:53.843583 | orchestrator | 2026-04-07 01:06:53 | INFO  | Task 69087c90-88ad-4832-afcc-98f13573e807 is in state STARTED 
2026-04-07 01:06:53.844741 | orchestrator | 2026-04-07 01:06:53 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:06:53.845711 | orchestrator | 2026-04-07 01:06:53 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:06:53.845879 | orchestrator | 2026-04-07 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:08:34.185373 | orchestrator | 2026-04-07 01:08:34 | INFO  | Task a2dc4d62-ec93-4414-8ea4-cdad1fb23c35 is in state STARTED 2026-04-07 01:08:34.185933 | orchestrator | 2026-04-07 01:08:34 | INFO  | Task 8085d2e9-08fe-4eac-bbbb-7073836c6f75 is in state STARTED 2026-04-07 01:08:34.187321 | orchestrator | 2026-04-07 01:08:34 | INFO  | Task 
69087c90-88ad-4832-afcc-98f13573e807 is in state SUCCESS 2026-04-07 01:08:34.188495 | orchestrator | 2026-04-07 01:08:34.188527 | orchestrator | 2026-04-07 01:08:34.188538 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:08:34.188548 | orchestrator | 2026-04-07 01:08:34.188554 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:08:34.188561 | orchestrator | Tuesday 07 April 2026 01:06:39 +0000 (0:00:00.276) 0:00:00.277 ********* 2026-04-07 01:08:34.188567 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:08:34.188576 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:08:34.188582 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:08:34.188590 | orchestrator | 2026-04-07 01:08:34.188598 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:08:34.188605 | orchestrator | Tuesday 07 April 2026 01:06:39 +0000 (0:00:00.257) 0:00:00.534 ********* 2026-04-07 01:08:34.188613 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-07 01:08:34.188618 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-07 01:08:34.188622 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-07 01:08:34.188626 | orchestrator | 2026-04-07 01:08:34.188630 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-07 01:08:34.188634 | orchestrator | 2026-04-07 01:08:34.188638 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-07 01:08:34.188642 | orchestrator | Tuesday 07 April 2026 01:06:40 +0000 (0:00:00.272) 0:00:00.807 ********* 2026-04-07 01:08:34.188646 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:08:34.188651 | orchestrator | 2026-04-07 01:08:34.188655 | 
orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-07 01:08:34.188659 | orchestrator | Tuesday 07 April 2026 01:06:40 +0000 (0:00:00.539) 0:00:01.346 ********* 2026-04-07 01:08:34.188664 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-07 01:08:34.188668 | orchestrator | 2026-04-07 01:08:34.188671 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-07 01:08:34.188675 | orchestrator | Tuesday 07 April 2026 01:06:43 +0000 (0:00:02.816) 0:00:04.163 ********* 2026-04-07 01:08:34.188696 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-07 01:08:34.188701 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-07 01:08:34.188705 | orchestrator | 2026-04-07 01:08:34.188709 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-07 01:08:34.188712 | orchestrator | Tuesday 07 April 2026 01:06:48 +0000 (0:00:05.447) 0:00:09.610 ********* 2026-04-07 01:08:34.188716 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 01:08:34.188720 | orchestrator | 2026-04-07 01:08:34.188724 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-07 01:08:34.188728 | orchestrator | Tuesday 07 April 2026 01:06:51 +0000 (0:00:02.739) 0:00:12.349 ********* 2026-04-07 01:08:34.188732 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-07 01:08:34.188736 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 01:08:34.188740 | orchestrator | 2026-04-07 01:08:34.188743 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-07 01:08:34.188747 | orchestrator | Tuesday 07 April 2026 01:06:55 +0000 
(0:00:03.394) 0:00:15.743 ********* 2026-04-07 01:08:34.188751 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 01:08:34.188755 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-07 01:08:34.188759 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-07 01:08:34.188763 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-07 01:08:34.188766 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-07 01:08:34.188770 | orchestrator | 2026-04-07 01:08:34.188774 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-07 01:08:34.188778 | orchestrator | Tuesday 07 April 2026 01:07:10 +0000 (0:00:15.554) 0:00:31.298 ********* 2026-04-07 01:08:34.188782 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-07 01:08:34.188785 | orchestrator | 2026-04-07 01:08:34.188789 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-07 01:08:34.188793 | orchestrator | Tuesday 07 April 2026 01:07:14 +0000 (0:00:04.190) 0:00:35.488 ********* 2026-04-07 01:08:34.188799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.188823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.188828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.188837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.188843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.188847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.188857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.188861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.188869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.188873 | 
orchestrator | 2026-04-07 01:08:34.188877 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-07 01:08:34.189013 | orchestrator | Tuesday 07 April 2026 01:07:17 +0000 (0:00:03.141) 0:00:38.630 ********* 2026-04-07 01:08:34.189019 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-07 01:08:34.189022 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-07 01:08:34.189026 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-07 01:08:34.189030 | orchestrator | 2026-04-07 01:08:34.189034 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-07 01:08:34.189038 | orchestrator | Tuesday 07 April 2026 01:07:20 +0000 (0:00:02.399) 0:00:41.029 ********* 2026-04-07 01:08:34.189042 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:08:34.189046 | orchestrator | 2026-04-07 01:08:34.189050 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-07 01:08:34.189054 | orchestrator | Tuesday 07 April 2026 01:07:20 +0000 (0:00:00.278) 0:00:41.308 ********* 2026-04-07 01:08:34.189058 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:08:34.189061 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:08:34.189065 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:08:34.189069 | orchestrator | 2026-04-07 01:08:34.189073 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-07 01:08:34.189077 | orchestrator | Tuesday 07 April 2026 01:07:21 +0000 (0:00:00.533) 0:00:41.842 ********* 2026-04-07 01:08:34.189081 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:08:34.189085 | orchestrator | 2026-04-07 01:08:34.189088 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA 
certificates] ******* 2026-04-07 01:08:34.189093 | orchestrator | Tuesday 07 April 2026 01:07:22 +0000 (0:00:01.291) 0:00:43.133 ********* 2026-04-07 01:08:34.189097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189158 | orchestrator | 2026-04-07 01:08:34.189162 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-07 01:08:34.189166 | orchestrator | Tuesday 07 April 2026 01:07:26 +0000 (0:00:04.477) 0:00:47.610 ********* 2026-04-07 01:08:34.189170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 01:08:34.189174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189183 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:08:34.189192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 01:08:34.189199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189207 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:08:34.189212 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 01:08:34.189215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189232 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:08:34.189241 | orchestrator | 2026-04-07 01:08:34.189248 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-07 01:08:34.189254 | orchestrator | Tuesday 07 April 2026 01:07:28 +0000 (0:00:01.143) 0:00:48.754 ********* 2026-04-07 01:08:34.189268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 01:08:34.189275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189287 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:08:34.189357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 01:08:34.189368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189392 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:08:34.189408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 01:08:34.189415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189428 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:08:34.189435 | orchestrator | 2026-04-07 01:08:34.189441 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-07 01:08:34.189447 | orchestrator | Tuesday 07 April 2026 01:07:29 +0000 (0:00:01.708) 0:00:50.463 ********* 2026-04-07 01:08:34.189454 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:08:34.189583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:08:34.189589 | orchestrator |
2026-04-07 01:08:34.189596 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-04-07 01:08:34.189603 | orchestrator | Tuesday 07 April 2026 01:07:34 +0000 (0:00:04.798) 0:00:55.262 *********
2026-04-07 01:08:34.189611 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:08:34.189618 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:08:34.189625 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:08:34.189629 | orchestrator |
2026-04-07 01:08:34.189633 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-04-07 01:08:34.189637 | orchestrator | Tuesday 07 April 2026 01:07:36 +0000 (0:00:01.986) 0:00:56.940 *********
2026-04-07 01:08:34.189641 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 01:08:34.189645 | orchestrator |
2026-04-07 01:08:34.189648 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-04-07 01:08:34.189652 | orchestrator | Tuesday 07 April 2026 01:07:38 +0000 (0:00:01.986) 0:00:58.926 *********
2026-04-07 01:08:34.189656 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:08:34.189660 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:08:34.189663 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:08:34.189667 | orchestrator |
2026-04-07 01:08:34.189671 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-04-07 01:08:34.189675 | orchestrator | Tuesday 07 April 2026 01:07:38 +0000 (0:00:00.763) 0:00:59.689 *********
2026-04-07 01:08:34.189679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-07 01:08:34.189686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189711 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189735 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189746 | orchestrator | 2026-04-07 01:08:34.189752 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-07 01:08:34.189760 | orchestrator | Tuesday 07 April 2026 01:07:48 +0000 (0:00:09.965) 0:01:09.654 ********* 2026-04-07 01:08:34.189769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 01:08:34.189775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189790 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:08:34.189796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 01:08:34.189801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189822 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:08:34.189828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 01:08:34.189840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:08:34.189855 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:08:34.189862 | orchestrator | 2026-04-07 01:08:34.189868 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-07 01:08:34.189872 | orchestrator | Tuesday 07 April 2026 01:07:49 +0000 (0:00:00.835) 0:01:10.490 ********* 2026-04-07 01:08:34.189876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 01:08:34.189909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:08:34.189957 | orchestrator | 2026-04-07 01:08:34.189964 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-07 01:08:34.189970 | orchestrator | Tuesday 07 April 2026 01:07:52 +0000 (0:00:02.496) 0:01:12.987 ********* 2026-04-07 01:08:34.189976 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:08:34.189982 | orchestrator | skipping: 
[testbed-node-1] 2026-04-07 01:08:34.189988 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:08:34.189994 | orchestrator | 2026-04-07 01:08:34.190001 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-07 01:08:34.190007 | orchestrator | Tuesday 07 April 2026 01:07:52 +0000 (0:00:00.252) 0:01:13.239 ********* 2026-04-07 01:08:34.190049 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:08:34.190057 | orchestrator | 2026-04-07 01:08:34.190062 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-07 01:08:34.190068 | orchestrator | Tuesday 07 April 2026 01:07:54 +0000 (0:00:01.733) 0:01:14.973 ********* 2026-04-07 01:08:34.190074 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:08:34.190081 | orchestrator | 2026-04-07 01:08:34.190089 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-07 01:08:34.190097 | orchestrator | Tuesday 07 April 2026 01:07:56 +0000 (0:00:01.867) 0:01:16.840 ********* 2026-04-07 01:08:34.190104 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:08:34.190109 | orchestrator | 2026-04-07 01:08:34.190113 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-07 01:08:34.190116 | orchestrator | Tuesday 07 April 2026 01:08:06 +0000 (0:00:10.288) 0:01:27.129 ********* 2026-04-07 01:08:34.190120 | orchestrator | 2026-04-07 01:08:34.190124 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-07 01:08:34.190128 | orchestrator | Tuesday 07 April 2026 01:08:06 +0000 (0:00:00.209) 0:01:27.338 ********* 2026-04-07 01:08:34.190132 | orchestrator | 2026-04-07 01:08:34.190135 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-07 01:08:34.190139 | orchestrator | Tuesday 07 April 2026 01:08:06 +0000 (0:00:00.099) 
0:01:27.437 ********* 2026-04-07 01:08:34.190143 | orchestrator | 2026-04-07 01:08:34.190147 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-07 01:08:34.190151 | orchestrator | Tuesday 07 April 2026 01:08:06 +0000 (0:00:00.051) 0:01:27.489 ********* 2026-04-07 01:08:34.190156 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:08:34.190160 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:08:34.190164 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:08:34.190169 | orchestrator | 2026-04-07 01:08:34.190173 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-07 01:08:34.190178 | orchestrator | Tuesday 07 April 2026 01:08:16 +0000 (0:00:09.693) 0:01:37.183 ********* 2026-04-07 01:08:34.190183 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:08:34.190187 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:08:34.190192 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:08:34.190196 | orchestrator | 2026-04-07 01:08:34.190201 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-07 01:08:34.190205 | orchestrator | Tuesday 07 April 2026 01:08:22 +0000 (0:00:06.030) 0:01:43.214 ********* 2026-04-07 01:08:34.190210 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:08:34.190215 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:08:34.190219 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:08:34.190224 | orchestrator | 2026-04-07 01:08:34.190228 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:08:34.190234 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 01:08:34.190241 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 01:08:34.190250 | orchestrator | 
testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 01:08:34.190254 | orchestrator | 2026-04-07 01:08:34.190258 | orchestrator | 2026-04-07 01:08:34.190262 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:08:34.190266 | orchestrator | Tuesday 07 April 2026 01:08:31 +0000 (0:00:09.501) 0:01:52.716 ********* 2026-04-07 01:08:34.190269 | orchestrator | =============================================================================== 2026-04-07 01:08:34.190273 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.55s 2026-04-07 01:08:34.190285 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.29s 2026-04-07 01:08:34.190289 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.97s 2026-04-07 01:08:34.190293 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.70s 2026-04-07 01:08:34.190297 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.50s 2026-04-07 01:08:34.190301 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.03s 2026-04-07 01:08:34.190304 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.45s 2026-04-07 01:08:34.190308 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.80s 2026-04-07 01:08:34.190312 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.48s 2026-04-07 01:08:34.190316 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.19s 2026-04-07 01:08:34.190319 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.39s 2026-04-07 01:08:34.190323 | orchestrator | barbican : Ensuring config directories exist 
---------------------------- 3.14s 2026-04-07 01:08:34.190327 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 2.82s 2026-04-07 01:08:34.190390 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 2.74s 2026-04-07 01:08:34.190397 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.50s 2026-04-07 01:08:34.190403 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.40s 2026-04-07 01:08:34.190409 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.99s 2026-04-07 01:08:34.190415 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 1.87s 2026-04-07 01:08:34.190421 | orchestrator | barbican : Creating barbican database ----------------------------------- 1.73s 2026-04-07 01:08:34.190427 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.71s 2026-04-07 01:08:34.190433 | orchestrator | 2026-04-07 01:08:34 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:08:34.190439 | orchestrator | 2026-04-07 01:08:34 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:08:34.190446 | orchestrator | 2026-04-07 01:08:34 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:08:37.212231 | orchestrator | 2026-04-07 01:08:37 | INFO  | Task a2dc4d62-ec93-4414-8ea4-cdad1fb23c35 is in state STARTED 2026-04-07 01:08:37.212551 | orchestrator | 2026-04-07 01:08:37 | INFO  | Task 8085d2e9-08fe-4eac-bbbb-7073836c6f75 is in state STARTED 2026-04-07 01:08:37.213381 | orchestrator | 2026-04-07 01:08:37 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:08:37.214161 | orchestrator | 2026-04-07 01:08:37 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:08:37.214186 | orchestrator | 
2026-04-07 01:08:37 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:09:44.148121 | orchestrator | 2026-04-07 01:09:44 | INFO  | Task
a2dc4d62-ec93-4414-8ea4-cdad1fb23c35 is in state STARTED
2026-04-07 01:09:44.148479 | orchestrator | 2026-04-07 01:09:44 | INFO  | Task 8085d2e9-08fe-4eac-bbbb-7073836c6f75 is in state STARTED
2026-04-07 01:09:44.149263 | orchestrator | 2026-04-07 01:09:44 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED
2026-04-07 01:09:44.150132 | orchestrator | 2026-04-07 01:09:44 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED
2026-04-07 01:09:44.150169 | orchestrator | 2026-04-07 01:09:44 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:09:47.190503 | orchestrator | 2026-04-07 01:09:47 | INFO  | Task a2dc4d62-ec93-4414-8ea4-cdad1fb23c35 is in state STARTED
2026-04-07 01:09:47.190752 | orchestrator | 2026-04-07 01:09:47 | INFO  | Task 8085d2e9-08fe-4eac-bbbb-7073836c6f75 is in state STARTED
2026-04-07 01:09:47.192707 | orchestrator | 2026-04-07 01:09:47 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED
2026-04-07 01:09:47.194068 | orchestrator | 2026-04-07 01:09:47 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED
2026-04-07 01:09:47.194148 | orchestrator | 2026-04-07 01:09:47 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:09:50.228973 | orchestrator | 2026-04-07 01:09:50 | INFO  | Task a2dc4d62-ec93-4414-8ea4-cdad1fb23c35 is in state STARTED
2026-04-07 01:09:50.232457 | orchestrator | 2026-04-07 01:09:50 | INFO  | Task 8085d2e9-08fe-4eac-bbbb-7073836c6f75 is in state SUCCESS
2026-04-07 01:09:50.236051 | orchestrator |
2026-04-07 01:09:50.236157 | orchestrator |
2026-04-07 01:09:50.236168 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 01:09:50.236176 | orchestrator |
2026-04-07 01:09:50.236182 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 01:09:50.236190 | orchestrator | Tuesday 07 April 2026 01:06:53 +0000 (0:00:00.294) 0:00:00.295
*********
2026-04-07 01:09:50.236196 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:09:50.236205 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:09:50.236253 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:09:50.236260 | orchestrator |
2026-04-07 01:09:50.236266 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 01:09:50.236307 | orchestrator | Tuesday 07 April 2026 01:06:53 +0000 (0:00:00.286) 0:00:00.581 *********
2026-04-07 01:09:50.236315 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-07 01:09:50.236330 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-07 01:09:50.236367 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-07 01:09:50.236373 | orchestrator |
2026-04-07 01:09:50.236379 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-07 01:09:50.236386 | orchestrator |
2026-04-07 01:09:50.236392 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-07 01:09:50.236447 | orchestrator | Tuesday 07 April 2026 01:06:54 +0000 (0:00:00.259) 0:00:00.841 *********
2026-04-07 01:09:50.236454 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:09:50.236461 | orchestrator |
2026-04-07 01:09:50.236468 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-07 01:09:50.236474 | orchestrator | Tuesday 07 April 2026 01:06:54 +0000 (0:00:00.575) 0:00:01.417 *********
2026-04-07 01:09:50.236480 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-07 01:09:50.236486 | orchestrator |
2026-04-07 01:09:50.236492 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-07 01:09:50.236497 | orchestrator | Tuesday 07 April 2026 01:06:58 +0000 (0:00:03.747) 0:00:05.165 *********
2026-04-07 01:09:50.236504 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-07 01:09:50.236511 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-07 01:09:50.236517 | orchestrator |
2026-04-07 01:09:50.236523 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-07 01:09:50.236529 | orchestrator | Tuesday 07 April 2026 01:07:04 +0000 (0:00:06.245) 0:00:11.410 *********
2026-04-07 01:09:50.236536 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-07 01:09:50.236542 | orchestrator |
2026-04-07 01:09:50.236548 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-07 01:09:50.236554 | orchestrator | Tuesday 07 April 2026 01:07:07 +0000 (0:00:03.084) 0:00:14.495 *********
2026-04-07 01:09:50.236560 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-07 01:09:50.236567 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 01:09:50.236573 | orchestrator |
2026-04-07 01:09:50.236579 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-07 01:09:50.236585 | orchestrator | Tuesday 07 April 2026 01:07:12 +0000 (0:00:04.366) 0:00:18.862 *********
2026-04-07 01:09:50.236592 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-07 01:09:50.236598 | orchestrator |
2026-04-07 01:09:50.236628 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-04-07 01:09:50.236634 | orchestrator | Tuesday 07 April 2026 01:07:15 +0000 (0:00:03.466) 0:00:22.329 *********
2026-04-07 01:09:50.236641 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-04-07 01:09:50.236647 | orchestrator
|
2026-04-07 01:09:50.236653 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-04-07 01:09:50.236661 | orchestrator | Tuesday 07 April 2026 01:07:19 +0000 (0:00:04.053) 0:00:26.382 *********
2026-04-07 01:09:50.236671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.236700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.236715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.236729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.236750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.236758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.236783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.236910 | orchestrator |
2026-04-07 01:09:50.236916 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-04-07 01:09:50.236922 | orchestrator | Tuesday 07 April 2026 01:07:24 +0000 (0:00:04.703) 0:00:31.085 *********
2026-04-07 01:09:50.236928 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:09:50.236934 | orchestrator |
2026-04-07 01:09:50.236941 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-04-07 01:09:50.236947 | orchestrator | Tuesday 07 April 2026 01:07:24 +0000 (0:00:00.110) 0:00:31.196 *********
2026-04-07 01:09:50.236953 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:09:50.236959 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:09:50.236966 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:09:50.236972 | orchestrator |
2026-04-07 01:09:50.236978 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-07 01:09:50.236984 | orchestrator | Tuesday 07 April 2026 01:07:24 +0000 (0:00:00.349) 0:00:31.545 *********
2026-04-07 01:09:50.236990 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:09:50.236996 | orchestrator |
2026-04-07 01:09:50.237019 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-04-07 01:09:50.237025 |
orchestrator | Tuesday 07 April 2026 01:07:26 +0000 (0:00:01.124) 0:00:32.669 ********* 2026-04-07 01:09:50.237031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 01:09:50.237047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 01:09:50.237055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 01:09:50.237067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237129 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.237186 | orchestrator | 2026-04-07 01:09:50.237193 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-07 01:09:50.237199 | orchestrator | Tuesday 07 April 2026 01:07:33 +0000 (0:00:07.897) 0:00:40.567 ********* 2026-04-07 01:09:50.237206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 01:09:50.237212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 01:09:50.237230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237263 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:09:50.237269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 01:09:50.237276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 01:09:50.237831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237904 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:09:50.237911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 01:09:50.237919 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 01:09:50.237946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.237981 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:09:50.237987 | orchestrator | 2026-04-07 01:09:50.237993 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-07 01:09:50.238000 | orchestrator | Tuesday 07 April 2026 01:07:34 +0000 (0:00:00.944) 0:00:41.512 ********* 2026-04-07 01:09:50.238006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 01:09:50.238055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 01:09:50.238081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.238099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.238106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.238112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.238118 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:09:50.238125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238189 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:09:50.238195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238255 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:09:50.238261 | orchestrator |
2026-04-07 01:09:50.238266 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-04-07 01:09:50.238272 | orchestrator | Tuesday 07 April 2026 01:07:36 +0000 (0:00:01.134) 0:00:42.647 *********
2026-04-07 01:09:50.238278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238528 | orchestrator |
2026-04-07 01:09:50.238535 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-04-07 01:09:50.238541 | orchestrator | Tuesday 07 April 2026 01:07:44 +0000 (0:00:08.299) 0:00:50.946 *********
2026-04-07 01:09:50.238548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238721 | orchestrator |
2026-04-07 01:09:50.238727 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-04-07 01:09:50.238734 | orchestrator | Tuesday 07 April 2026 01:08:02 +0000 (0:00:17.803) 0:01:08.749 *********
2026-04-07 01:09:50.238740 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-07 01:09:50.238746 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-07 01:09:50.238752 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-07 01:09:50.238758 | orchestrator |
2026-04-07 01:09:50.238764 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-04-07 01:09:50.238771 | orchestrator | Tuesday 07 April 2026 01:08:06 +0000 (0:00:04.381) 0:01:13.130 *********
2026-04-07 01:09:50.238777 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-07 01:09:50.238783 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-07 01:09:50.238789 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-07 01:09:50.238795 | orchestrator |
2026-04-07 01:09:50.238800 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-04-07 01:09:50.238806 | orchestrator | Tuesday 07 April 2026 01:08:11 +0000 (0:00:04.780) 0:01:17.910 *********
2026-04-07 01:09:50.238887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 01:09:50.238919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 01:09:50.238955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 01:09:50.238965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.238977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.238983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.238994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.238999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239037 | orchestrator | 2026-04-07 01:09:50.239051 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-07 01:09:50.239057 | orchestrator | Tuesday 07 April 2026 01:08:14 +0000 (0:00:03.503) 0:01:21.414 ********* 2026-04-07 01:09:50.239063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 01:09:50.239069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 01:09:50.239075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 01:09:50.239087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239211 | orchestrator | 
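The per-item "changed"/"skipping" pattern in the rndc.conf and rndc.key tasks above comes from looping a single template task over the whole designate service map and gating each item on the services that actually consume the file: only designate-backend-bind9 and designate-worker need rndc credentials, so the other services are skipped on every node. A minimal sketch of that pattern follows; the variable name `designate_services` and the exact `when` condition are assumptions in the style of the kolla-ansible designate role, not taken verbatim from this log:

```yaml
# Hypothetical task sketch: render rndc.conf only into the config
# directories of services that talk to the bind9 backend. Iterating
# with_dict over the full service map is what produces one
# "changed"/"skipping" line per service per node in the log above.
- name: Copying over rndc.conf
  become: true
  template:
    src: rndc.conf.j2
    dest: "/etc/kolla/{{ item.key }}/rndc.conf"
    mode: "0660"
  when:
    - item.value.enabled | bool
    - item.key in ['designate-backend-bind9', 'designate-worker']
  with_dict: "{{ designate_services }}"
```

With three nodes and six services in the map, this yields the eighteen result lines seen per task: four "skipping" and two "changed" per node.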
2026-04-07 01:09:50.239217 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-07 01:09:50.239223 | orchestrator | Tuesday 07 April 2026 01:08:18 +0000 (0:00:03.652) 0:01:25.067 ********* 2026-04-07 01:09:50.239229 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:09:50.239236 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:09:50.239242 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:09:50.239248 | orchestrator | 2026-04-07 01:09:50.239254 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-07 01:09:50.239260 | orchestrator | Tuesday 07 April 2026 01:08:18 +0000 (0:00:00.353) 0:01:25.421 ********* 2026-04-07 01:09:50.239267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 01:09:50.239302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 01:09:50.239309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 01:09:50.239319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 01:09:50.239441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239453 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:09:50.239460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239500 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:09:50.239506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2026-04-07 01:09:50.239512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 01:09:50.239519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239532 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:09:50.239557 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:09:50.239563 | orchestrator | 2026-04-07 01:09:50.239569 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-07 01:09:50.239576 | orchestrator | Tuesday 07 April 2026 01:08:19 +0000 (0:00:00.832) 0:01:26.254 ********* 2026-04-07 01:09:50.239583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 01:09:50.239589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 01:09:50.239596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 01:09:50.239603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:09:50.239741 | orchestrator | 2026-04-07 01:09:50.239747 | orchestrator | TASK [designate : 
include_tasks] *********************************************** 2026-04-07 01:09:50.239753 | orchestrator | Tuesday 07 April 2026 01:08:25 +0000 (0:00:05.614) 0:01:31.868 ********* 2026-04-07 01:09:50.239759 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:09:50.239765 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:09:50.239771 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:09:50.239777 | orchestrator | 2026-04-07 01:09:50.239784 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-07 01:09:50.239794 | orchestrator | Tuesday 07 April 2026 01:08:25 +0000 (0:00:00.711) 0:01:32.580 ********* 2026-04-07 01:09:50.239801 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-07 01:09:50.239808 | orchestrator | 2026-04-07 01:09:50.239814 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-07 01:09:50.239820 | orchestrator | Tuesday 07 April 2026 01:08:27 +0000 (0:00:01.977) 0:01:34.557 ********* 2026-04-07 01:09:50.239826 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-07 01:09:50.239832 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-07 01:09:50.239838 | orchestrator | 2026-04-07 01:09:50.239843 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-07 01:09:50.239850 | orchestrator | Tuesday 07 April 2026 01:08:30 +0000 (0:00:02.048) 0:01:36.605 ********* 2026-04-07 01:09:50.239856 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:09:50.239861 | orchestrator | 2026-04-07 01:09:50.239868 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-07 01:09:50.239874 | orchestrator | Tuesday 07 April 2026 01:08:44 +0000 (0:00:14.751) 0:01:51.357 ********* 2026-04-07 01:09:50.239880 | orchestrator | 2026-04-07 01:09:50.239886 | orchestrator | TASK [designate : 
Flush handlers] ********************************************** 2026-04-07 01:09:50.239892 | orchestrator | Tuesday 07 April 2026 01:08:44 +0000 (0:00:00.130) 0:01:51.487 ********* 2026-04-07 01:09:50.239898 | orchestrator | 2026-04-07 01:09:50.239904 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-07 01:09:50.239911 | orchestrator | Tuesday 07 April 2026 01:08:45 +0000 (0:00:00.148) 0:01:51.635 ********* 2026-04-07 01:09:50.239917 | orchestrator | 2026-04-07 01:09:50.239923 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-07 01:09:50.239929 | orchestrator | Tuesday 07 April 2026 01:08:45 +0000 (0:00:00.132) 0:01:51.768 ********* 2026-04-07 01:09:50.239935 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:09:50.239941 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:09:50.239947 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:09:50.239953 | orchestrator | 2026-04-07 01:09:50.239959 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-07 01:09:50.239965 | orchestrator | Tuesday 07 April 2026 01:08:57 +0000 (0:00:12.201) 0:02:03.969 ********* 2026-04-07 01:09:50.239971 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:09:50.239977 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:09:50.239988 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:09:50.239995 | orchestrator | 2026-04-07 01:09:50.240001 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-07 01:09:50.240007 | orchestrator | Tuesday 07 April 2026 01:09:06 +0000 (0:00:09.141) 0:02:13.111 ********* 2026-04-07 01:09:50.240013 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:09:50.240019 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:09:50.240026 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:09:50.240031 | orchestrator | 
2026-04-07 01:09:50.240037 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-07 01:09:50.240043 | orchestrator | Tuesday 07 April 2026 01:09:12 +0000 (0:00:06.126) 0:02:19.237 ********* 2026-04-07 01:09:50.240049 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:09:50.240055 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:09:50.240060 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:09:50.240066 | orchestrator | 2026-04-07 01:09:50.240072 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-07 01:09:50.240078 | orchestrator | Tuesday 07 April 2026 01:09:24 +0000 (0:00:11.729) 0:02:30.966 ********* 2026-04-07 01:09:50.240085 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:09:50.240091 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:09:50.240097 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:09:50.240103 | orchestrator | 2026-04-07 01:09:50.240109 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-07 01:09:50.240116 | orchestrator | Tuesday 07 April 2026 01:09:35 +0000 (0:00:11.423) 0:02:42.390 ********* 2026-04-07 01:09:50.240122 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:09:50.240128 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:09:50.240133 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:09:50.240139 | orchestrator | 2026-04-07 01:09:50.240146 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-07 01:09:50.240152 | orchestrator | Tuesday 07 April 2026 01:09:41 +0000 (0:00:05.664) 0:02:48.054 ********* 2026-04-07 01:09:50.240157 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:09:50.240163 | orchestrator | 2026-04-07 01:09:50.240169 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:09:50.240176 | 
orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 01:09:50.240183 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 01:09:50.240188 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 01:09:50.240194 | orchestrator | 2026-04-07 01:09:50.240200 | orchestrator | 2026-04-07 01:09:50.240209 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:09:50.240215 | orchestrator | Tuesday 07 April 2026 01:09:47 +0000 (0:00:06.181) 0:02:54.235 ********* 2026-04-07 01:09:50.240221 | orchestrator | =============================================================================== 2026-04-07 01:09:50.240227 | orchestrator | designate : Copying over designate.conf -------------------------------- 17.80s 2026-04-07 01:09:50.240233 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.75s 2026-04-07 01:09:50.240238 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.20s 2026-04-07 01:09:50.240245 | orchestrator | designate : Restart designate-producer container ----------------------- 11.73s 2026-04-07 01:09:50.240254 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.42s 2026-04-07 01:09:50.240260 | orchestrator | designate : Restart designate-api container ----------------------------- 9.14s 2026-04-07 01:09:50.240267 | orchestrator | designate : Copying over config.json files for services ----------------- 8.30s 2026-04-07 01:09:50.240273 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.90s 2026-04-07 01:09:50.240284 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.25s 2026-04-07 01:09:50.240290 | orchestrator | designate : 
Non-destructive DNS pools update ---------------------------- 6.18s 2026-04-07 01:09:50.240296 | orchestrator | designate : Restart designate-central container ------------------------- 6.13s 2026-04-07 01:09:50.240302 | orchestrator | designate : Restart designate-worker container -------------------------- 5.66s 2026-04-07 01:09:50.240308 | orchestrator | designate : Check designate containers ---------------------------------- 5.62s 2026-04-07 01:09:50.240314 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.78s 2026-04-07 01:09:50.240321 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.70s 2026-04-07 01:09:50.240326 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.38s 2026-04-07 01:09:50.240355 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.37s 2026-04-07 01:09:50.240362 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.05s 2026-04-07 01:09:50.240368 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.75s 2026-04-07 01:09:50.240375 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.65s 2026-04-07 01:09:50.240381 | orchestrator | 2026-04-07 01:09:50 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:09:50.240387 | orchestrator | 2026-04-07 01:09:50 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:09:50.240933 | orchestrator | 2026-04-07 01:09:50 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:09:50.240958 | orchestrator | 2026-04-07 01:09:50 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:09:53.281363 | orchestrator | 2026-04-07 01:09:53 | INFO  | Task a2dc4d62-ec93-4414-8ea4-cdad1fb23c35 is in state STARTED 2026-04-07 01:09:53.283055 | 
orchestrator | 2026-04-07 01:09:53 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:09:53.284657 | orchestrator | 2026-04-07 01:09:53 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:09:53.285826 | orchestrator | 2026-04-07 01:09:53 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:09:53.286205 | orchestrator | 2026-04-07 01:09:53 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:09:56.337891 | orchestrator | 2026-04-07 01:09:56 | INFO  | Task a2dc4d62-ec93-4414-8ea4-cdad1fb23c35 is in state SUCCESS 2026-04-07 01:09:56.339634 | orchestrator | 2026-04-07 01:09:56 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:09:56.341803 | orchestrator | 2026-04-07 01:09:56 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:09:56.343942 | orchestrator | 2026-04-07 01:09:56 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:09:56.344939 | orchestrator | 2026-04-07 01:09:56 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:09:56.345578 | orchestrator | 2026-04-07 01:09:56 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:09:59.402215 | orchestrator | 2026-04-07 01:09:59 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:09:59.402636 | orchestrator | 2026-04-07 01:09:59 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:09:59.403793 | orchestrator | 2026-04-07 01:09:59 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:09:59.404749 | orchestrator | 2026-04-07 01:09:59 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:09:59.404791 | orchestrator | 2026-04-07 01:09:59 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:02.471226 | orchestrator | 2026-04-07 
01:10:02 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:02.471272 | orchestrator | 2026-04-07 01:10:02 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:02.471811 | orchestrator | 2026-04-07 01:10:02 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:02.472516 | orchestrator | 2026-04-07 01:10:02 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:10:02.472677 | orchestrator | 2026-04-07 01:10:02 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:05.524051 | orchestrator | 2026-04-07 01:10:05 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:05.526003 | orchestrator | 2026-04-07 01:10:05 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:05.527757 | orchestrator | 2026-04-07 01:10:05 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:05.528179 | orchestrator | 2026-04-07 01:10:05 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:10:05.528197 | orchestrator | 2026-04-07 01:10:05 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:08.579402 | orchestrator | 2026-04-07 01:10:08 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:08.580928 | orchestrator | 2026-04-07 01:10:08 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:08.582764 | orchestrator | 2026-04-07 01:10:08 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:08.583835 | orchestrator | 2026-04-07 01:10:08 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:10:08.583862 | orchestrator | 2026-04-07 01:10:08 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:11.616735 | orchestrator | 2026-04-07 01:10:11 | INFO  | Task 
6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:11.618464 | orchestrator | 2026-04-07 01:10:11 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:11.619444 | orchestrator | 2026-04-07 01:10:11 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:11.620147 | orchestrator | 2026-04-07 01:10:11 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:10:11.620176 | orchestrator | 2026-04-07 01:10:11 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:14.665947 | orchestrator | 2026-04-07 01:10:14 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:14.670285 | orchestrator | 2026-04-07 01:10:14 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:14.671370 | orchestrator | 2026-04-07 01:10:14 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:14.674828 | orchestrator | 2026-04-07 01:10:14 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:10:14.674886 | orchestrator | 2026-04-07 01:10:14 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:17.732237 | orchestrator | 2026-04-07 01:10:17 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:17.732657 | orchestrator | 2026-04-07 01:10:17 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:17.733914 | orchestrator | 2026-04-07 01:10:17 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:17.735169 | orchestrator | 2026-04-07 01:10:17 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:10:17.735196 | orchestrator | 2026-04-07 01:10:17 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:20.777582 | orchestrator | 2026-04-07 01:10:20 | INFO  | Task 
6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:20.780605 | orchestrator | 2026-04-07 01:10:20 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:20.781602 | orchestrator | 2026-04-07 01:10:20 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:20.784541 | orchestrator | 2026-04-07 01:10:20 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:10:20.784619 | orchestrator | 2026-04-07 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:48.234421 | orchestrator | 2026-04-07 01:10:48 | INFO  | Task 
6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:48.234787 | orchestrator | 2026-04-07 01:10:48 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:48.235609 | orchestrator | 2026-04-07 01:10:48 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:48.236677 | orchestrator | 2026-04-07 01:10:48 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state STARTED 2026-04-07 01:10:48.236707 | orchestrator | 2026-04-07 01:10:48 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:51.271422 | orchestrator | 2026-04-07 01:10:51 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:51.272090 | orchestrator | 2026-04-07 01:10:51 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:51.273048 | orchestrator | 2026-04-07 01:10:51 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:51.275138 | orchestrator | 2026-04-07 01:10:51 | INFO  | Task 0ab1e176-cadc-42b6-a604-27e14a8044f0 is in state SUCCESS 2026-04-07 01:10:51.276463 | orchestrator | 2026-04-07 01:10:51.276493 | orchestrator | 2026-04-07 01:10:51.276501 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-07 01:10:51.276508 | orchestrator | 2026-04-07 01:10:51.276515 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-07 01:10:51.276521 | orchestrator | Tuesday 07 April 2026 01:08:36 +0000 (0:00:00.187) 0:00:00.187 ********* 2026-04-07 01:10:51.276529 | orchestrator | changed: [localhost] 2026-04-07 01:10:51.276538 | orchestrator | 2026-04-07 01:10:51.276545 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-04-07 01:10:51.276551 | orchestrator | Tuesday 07 April 2026 01:08:38 +0000 (0:00:01.446) 0:00:01.633 ********* 2026-04-07 01:10:51.276559 | 
orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-04-07 01:10:51.276569 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2026-04-07 01:10:51.276579 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left). 2026-04-07 01:10:51.276591 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.initramfs.sha256"} 2026-04-07 01:10:51.276603 | orchestrator | 2026-04-07 01:10:51.276616 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:10:51.276625 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-07 01:10:51.276633 | orchestrator | 2026-04-07 01:10:51.276639 | orchestrator | 2026-04-07 01:10:51.276646 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:10:51.276653 | orchestrator | Tuesday 07 April 2026 01:09:54 +0000 (0:01:16.381) 0:01:18.015 ********* 2026-04-07 01:10:51.276660 | orchestrator | =============================================================================== 2026-04-07 01:10:51.276667 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 76.38s 2026-04-07 01:10:51.276674 | orchestrator | Ensure the destination directory exists --------------------------------- 1.45s 2026-04-07 01:10:51.276680 | orchestrator | 2026-04-07 01:10:51.276684 | orchestrator | 2026-04-07 01:10:51.276687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:10:51.276691 | orchestrator | 2026-04-07 01:10:51.276695 | orchestrator | TASK [Group hosts based 
on Kolla action] *************************************** 2026-04-07 01:10:51.276700 | orchestrator | Tuesday 07 April 2026 01:06:23 +0000 (0:00:00.318) 0:00:00.318 ********* 2026-04-07 01:10:51.276707 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:10:51.276714 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:10:51.276720 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:10:51.276726 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:10:51.276732 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:10:51.276739 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:10:51.276746 | orchestrator | 2026-04-07 01:10:51.276752 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:10:51.276759 | orchestrator | Tuesday 07 April 2026 01:06:23 +0000 (0:00:00.560) 0:00:00.878 ********* 2026-04-07 01:10:51.276766 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-07 01:10:51.276770 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-07 01:10:51.276774 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-07 01:10:51.276778 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-07 01:10:51.276782 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-07 01:10:51.276786 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-07 01:10:51.276789 | orchestrator | 2026-04-07 01:10:51.276793 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-07 01:10:51.276808 | orchestrator | 2026-04-07 01:10:51.276811 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-07 01:10:51.276815 | orchestrator | Tuesday 07 April 2026 01:06:24 +0000 (0:00:00.740) 0:00:01.618 ********* 2026-04-07 01:10:51.276819 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:10:51.276823 | orchestrator | 2026-04-07 01:10:51.276833 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-07 01:10:51.276838 | orchestrator | Tuesday 07 April 2026 01:06:25 +0000 (0:00:01.088) 0:00:02.707 ********* 2026-04-07 01:10:51.276841 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:10:51.276845 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:10:51.276907 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:10:51.276913 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:10:51.276916 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:10:51.276920 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:10:51.276924 | orchestrator | 2026-04-07 01:10:51.276928 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-07 01:10:51.276931 | orchestrator | Tuesday 07 April 2026 01:06:27 +0000 (0:00:01.541) 0:00:04.248 ********* 2026-04-07 01:10:51.276935 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:10:51.276939 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:10:51.276943 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:10:51.276946 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:10:51.276972 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:10:51.276976 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:10:51.276980 | orchestrator | 2026-04-07 01:10:51.276984 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-07 01:10:51.276988 | orchestrator | Tuesday 07 April 2026 01:06:29 +0000 (0:00:01.820) 0:00:06.069 ********* 2026-04-07 01:10:51.276992 | orchestrator | ok: [testbed-node-0] => { 2026-04-07 01:10:51.276996 | orchestrator |  "changed": false, 2026-04-07 01:10:51.277000 | orchestrator |  "msg": "All assertions passed" 2026-04-07 01:10:51.277004 | orchestrator | } 2026-04-07 01:10:51.277016 
| orchestrator | ok: [testbed-node-1] => { 2026-04-07 01:10:51.277020 | orchestrator |  "changed": false, 2026-04-07 01:10:51.277024 | orchestrator |  "msg": "All assertions passed" 2026-04-07 01:10:51.277028 | orchestrator | } 2026-04-07 01:10:51.277032 | orchestrator | ok: [testbed-node-2] => { 2026-04-07 01:10:51.277036 | orchestrator |  "changed": false, 2026-04-07 01:10:51.277039 | orchestrator |  "msg": "All assertions passed" 2026-04-07 01:10:51.277043 | orchestrator | } 2026-04-07 01:10:51.277047 | orchestrator | ok: [testbed-node-3] => { 2026-04-07 01:10:51.277051 | orchestrator |  "changed": false, 2026-04-07 01:10:51.277055 | orchestrator |  "msg": "All assertions passed" 2026-04-07 01:10:51.277058 | orchestrator | } 2026-04-07 01:10:51.277062 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 01:10:51.277066 | orchestrator |  "changed": false, 2026-04-07 01:10:51.277070 | orchestrator |  "msg": "All assertions passed" 2026-04-07 01:10:51.277074 | orchestrator | } 2026-04-07 01:10:51.277077 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 01:10:51.277081 | orchestrator |  "changed": false, 2026-04-07 01:10:51.277085 | orchestrator |  "msg": "All assertions passed" 2026-04-07 01:10:51.277089 | orchestrator | } 2026-04-07 01:10:51.277092 | orchestrator | 2026-04-07 01:10:51.277096 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-07 01:10:51.277100 | orchestrator | Tuesday 07 April 2026 01:06:31 +0000 (0:00:02.179) 0:00:08.248 ********* 2026-04-07 01:10:51.277104 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.277108 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.277112 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.277115 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.277150 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.277155 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.277164 | 
orchestrator | 2026-04-07 01:10:51.277167 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-07 01:10:51.277171 | orchestrator | Tuesday 07 April 2026 01:06:32 +0000 (0:00:01.441) 0:00:09.690 ********* 2026-04-07 01:10:51.277175 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-07 01:10:51.277412 | orchestrator | 2026-04-07 01:10:51.277417 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-04-07 01:10:51.277421 | orchestrator | Tuesday 07 April 2026 01:06:35 +0000 (0:00:03.045) 0:00:12.735 ********* 2026-04-07 01:10:51.277425 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-07 01:10:51.277430 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-07 01:10:51.277433 | orchestrator | 2026-04-07 01:10:51.277437 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-07 01:10:51.277441 | orchestrator | Tuesday 07 April 2026 01:06:41 +0000 (0:00:05.217) 0:00:17.953 ********* 2026-04-07 01:10:51.277445 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 01:10:51.277470 | orchestrator | 2026-04-07 01:10:51.277474 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-07 01:10:51.277478 | orchestrator | Tuesday 07 April 2026 01:06:43 +0000 (0:00:02.657) 0:00:20.611 ********* 2026-04-07 01:10:51.277482 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-04-07 01:10:51.277486 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 01:10:51.277490 | orchestrator | 2026-04-07 01:10:51.277494 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-07 01:10:51.277498 | orchestrator | Tuesday 07 
April 2026 01:06:47 +0000 (0:00:03.333) 0:00:23.944 ********* 2026-04-07 01:10:51.277502 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 01:10:51.277505 | orchestrator | 2026-04-07 01:10:51.277509 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-04-07 01:10:51.277513 | orchestrator | Tuesday 07 April 2026 01:06:49 +0000 (0:00:02.943) 0:00:26.888 ********* 2026-04-07 01:10:51.277517 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-07 01:10:51.277521 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-07 01:10:51.277525 | orchestrator | 2026-04-07 01:10:51.277528 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-07 01:10:51.277532 | orchestrator | Tuesday 07 April 2026 01:06:56 +0000 (0:00:06.656) 0:00:33.545 ********* 2026-04-07 01:10:51.277536 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.277540 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.277544 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.277548 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.277551 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.277555 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.277559 | orchestrator | 2026-04-07 01:10:51.277567 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-07 01:10:51.277571 | orchestrator | Tuesday 07 April 2026 01:06:57 +0000 (0:00:00.542) 0:00:34.087 ********* 2026-04-07 01:10:51.277575 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.277579 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.277583 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.277587 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.277591 | orchestrator | skipping: [testbed-node-4] 
2026-04-07 01:10:51.277594 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.277598 | orchestrator | 2026-04-07 01:10:51.277602 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-07 01:10:51.277606 | orchestrator | Tuesday 07 April 2026 01:06:59 +0000 (0:00:02.313) 0:00:36.401 ********* 2026-04-07 01:10:51.277610 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:10:51.277614 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:10:51.277622 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:10:51.277626 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:10:51.277629 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:10:51.277633 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:10:51.277637 | orchestrator | 2026-04-07 01:10:51.277641 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-07 01:10:51.277645 | orchestrator | Tuesday 07 April 2026 01:07:00 +0000 (0:00:01.095) 0:00:37.496 ********* 2026-04-07 01:10:51.277648 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.277652 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.277656 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.277676 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.277681 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.277685 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.277689 | orchestrator | 2026-04-07 01:10:51.277693 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-07 01:10:51.277697 | orchestrator | Tuesday 07 April 2026 01:07:02 +0000 (0:00:02.199) 0:00:39.696 ********* 2026-04-07 01:10:51.277702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.277708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.277713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.277719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.277738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.277743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.277747 | orchestrator | 2026-04-07 01:10:51.277751 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-07 01:10:51.277755 | orchestrator | Tuesday 07 April 2026 01:07:05 +0000 (0:00:02.449) 0:00:42.145 ********* 2026-04-07 01:10:51.277759 | orchestrator | [WARNING]: Skipped 2026-04-07 01:10:51.277763 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-07 01:10:51.277767 | orchestrator | due to this access issue: 2026-04-07 01:10:51.277771 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-07 01:10:51.277775 | orchestrator | a directory 2026-04-07 01:10:51.277778 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 01:10:51.277782 | orchestrator | 2026-04-07 01:10:51.277786 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-07 01:10:51.277790 | orchestrator | Tuesday 07 April 2026 01:07:06 +0000 (0:00:00.861) 0:00:43.006 ********* 2026-04-07 01:10:51.277794 | orchestrator | included: 
/ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:10:51.277798 | orchestrator | 2026-04-07 01:10:51.277802 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-07 01:10:51.277806 | orchestrator | Tuesday 07 April 2026 01:07:07 +0000 (0:00:01.214) 0:00:44.221 ********* 2026-04-07 01:10:51.277810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.277823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.277838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.277843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 
01:10:51.277847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.277851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.277858 | orchestrator | 2026-04-07 01:10:51.277862 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-07 01:10:51.277866 | orchestrator | Tuesday 07 April 2026 01:07:10 +0000 (0:00:03.281) 0:00:47.503 ********* 2026-04-07 01:10:51.277872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.277876 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.277891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.277898 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.277905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.277912 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.277919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.277929 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.277938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.277946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.277953 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.277959 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.277966 | orchestrator | 2026-04-07 01:10:51.277995 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-07 01:10:51.278003 | orchestrator | Tuesday 07 April 2026 01:07:13 +0000 (0:00:02.591) 0:00:50.094 ********* 2026-04-07 01:10:51.278055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.278071 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.278082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.278107 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.278117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.278125 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.278141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278153 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.278171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278181 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.278193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278206 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.278216 | orchestrator | 2026-04-07 01:10:51.278226 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-07 01:10:51.278237 | orchestrator | Tuesday 07 April 2026 01:07:16 +0000 (0:00:03.648) 0:00:53.743 ********* 2026-04-07 01:10:51.278243 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.278250 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.278261 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.278269 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.278275 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.278281 | orchestrator | skipping: [testbed-node-5] 2026-04-07 
01:10:51.278288 | orchestrator | 2026-04-07 01:10:51.278294 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-07 01:10:51.278301 | orchestrator | Tuesday 07 April 2026 01:07:20 +0000 (0:00:03.182) 0:00:56.926 ********* 2026-04-07 01:10:51.278327 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.278336 | orchestrator | 2026-04-07 01:10:51.278344 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-07 01:10:51.278352 | orchestrator | Tuesday 07 April 2026 01:07:20 +0000 (0:00:00.389) 0:00:57.315 ********* 2026-04-07 01:10:51.278358 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.278367 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.278381 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.278389 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.278396 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.278405 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.278411 | orchestrator | 2026-04-07 01:10:51.278419 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-07 01:10:51.278428 | orchestrator | Tuesday 07 April 2026 01:07:21 +0000 (0:00:00.937) 0:00:58.253 ********* 2026-04-07 01:10:51.278442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.278452 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.278461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278469 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.278486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.278503 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.278511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.278519 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.278525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278532 
| orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.278542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278549 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.278562 | orchestrator | 2026-04-07 01:10:51.278568 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-07 01:10:51.278574 | orchestrator | Tuesday 07 April 2026 01:07:24 +0000 (0:00:03.150) 0:01:01.403 ********* 2026-04-07 01:10:51.278586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.278599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.278606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.278613 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.278623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.278635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.278647 | orchestrator | 2026-04-07 01:10:51.278652 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-07 01:10:51.278656 | orchestrator | Tuesday 07 April 2026 01:07:28 +0000 (0:00:03.853) 0:01:05.257 ********* 2026-04-07 01:10:51.278660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.278664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.278668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.278676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.278683 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.278690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.278693 | orchestrator | 2026-04-07 01:10:51.278697 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-07 01:10:51.278701 | orchestrator | Tuesday 07 April 2026 01:07:35 +0000 (0:00:07.348) 0:01:12.606 
********* 2026-04-07 01:10:51.278705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.278709 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.278715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.278721 | orchestrator | skipping: 
[testbed-node-0] 2026-04-07 01:10:51.278731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.278742 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.278749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278755 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.278762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278769 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.278775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278782 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.278789 | orchestrator | 2026-04-07 01:10:51.278795 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-07 01:10:51.278802 | orchestrator | Tuesday 07 April 2026 01:07:38 +0000 (0:00:02.569) 0:01:15.175 ********* 2026-04-07 01:10:51.278809 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.278814 | orchestrator | skipping: [testbed-node-3] 
2026-04-07 01:10:51.278818 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.278822 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:10:51.278826 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:10:51.278830 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:10:51.278833 | orchestrator | 2026-04-07 01:10:51.278837 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-07 01:10:51.278841 | orchestrator | Tuesday 07 April 2026 01:07:42 +0000 (0:00:03.735) 0:01:18.910 ********* 2026-04-07 01:10:51.278848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278856 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.278865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278869 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.278873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.278877 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.278881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.278885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.278894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.278898 | orchestrator | 2026-04-07 01:10:51.278902 | orchestrator | TASK [neutron : 
Copying over linuxbridge_agent.ini] **************************** 2026-04-07 01:10:51.278906 | orchestrator | Tuesday 07 April 2026 01:07:47 +0000 (0:00:05.279) 0:01:24.190 ********* 2026-04-07 01:10:51.278912 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.278916 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.278919 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.278923 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.278927 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.278931 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.278935 | orchestrator | 2026-04-07 01:10:51.278938 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-07 01:10:51.278942 | orchestrator | Tuesday 07 April 2026 01:07:49 +0000 (0:00:02.631) 0:01:26.821 ********* 2026-04-07 01:10:51.278946 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.278951 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.278954 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.278958 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.278962 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.278966 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.278969 | orchestrator | 2026-04-07 01:10:51.278973 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-07 01:10:51.278977 | orchestrator | Tuesday 07 April 2026 01:07:52 +0000 (0:00:02.509) 0:01:29.330 ********* 2026-04-07 01:10:51.278981 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.278985 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.278988 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.278992 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.278996 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279000 | 
orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279004 | orchestrator | 2026-04-07 01:10:51.279008 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-07 01:10:51.279012 | orchestrator | Tuesday 07 April 2026 01:07:54 +0000 (0:00:02.086) 0:01:31.416 ********* 2026-04-07 01:10:51.279016 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279019 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279024 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279028 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279032 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279035 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279039 | orchestrator | 2026-04-07 01:10:51.279043 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-07 01:10:51.279047 | orchestrator | Tuesday 07 April 2026 01:07:56 +0000 (0:00:01.893) 0:01:33.310 ********* 2026-04-07 01:10:51.279051 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279055 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279058 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279062 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279066 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279073 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279076 | orchestrator | 2026-04-07 01:10:51.279080 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-07 01:10:51.279084 | orchestrator | Tuesday 07 April 2026 01:07:58 +0000 (0:00:02.098) 0:01:35.408 ********* 2026-04-07 01:10:51.279088 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279092 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279096 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279099 | 
orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279103 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279107 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279111 | orchestrator | 2026-04-07 01:10:51.279115 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-07 01:10:51.279118 | orchestrator | Tuesday 07 April 2026 01:08:00 +0000 (0:00:02.230) 0:01:37.638 ********* 2026-04-07 01:10:51.279122 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 01:10:51.279126 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279130 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 01:10:51.279134 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279138 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 01:10:51.279141 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279145 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 01:10:51.279149 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279153 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 01:10:51.279158 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279166 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 01:10:51.279172 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279178 | orchestrator | 2026-04-07 01:10:51.279183 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-07 01:10:51.279190 | orchestrator | Tuesday 07 April 2026 01:08:02 +0000 (0:00:01.809) 0:01:39.448 ********* 2026-04-07 01:10:51.279199 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.279206 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.279222 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279229 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.279236 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.279245 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.279255 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.279266 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279270 | orchestrator | 2026-04-07 01:10:51.279274 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-07 01:10:51.279278 | orchestrator | Tuesday 07 April 2026 01:08:05 +0000 (0:00:02.667) 0:01:42.116 ********* 2026-04-07 01:10:51.279282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.279289 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.279297 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.279334 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.279344 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.279359 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.279367 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279371 | orchestrator | 2026-04-07 01:10:51.279375 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-07 01:10:51.279379 | orchestrator | Tuesday 07 April 2026 01:08:07 +0000 (0:00:02.602) 0:01:44.718 ********* 2026-04-07 01:10:51.279383 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279386 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279390 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279394 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279397 | 
orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279401 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279405 | orchestrator | 2026-04-07 01:10:51.279409 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-07 01:10:51.279413 | orchestrator | Tuesday 07 April 2026 01:08:11 +0000 (0:00:03.185) 0:01:47.904 ********* 2026-04-07 01:10:51.279416 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279420 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279424 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279428 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:10:51.279432 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:10:51.279435 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:10:51.279440 | orchestrator | 2026-04-07 01:10:51.279444 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-07 01:10:51.279448 | orchestrator | Tuesday 07 April 2026 01:08:14 +0000 (0:00:03.398) 0:01:51.302 ********* 2026-04-07 01:10:51.279451 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279455 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279459 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279463 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279467 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279470 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279474 | orchestrator | 2026-04-07 01:10:51.279478 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-07 01:10:51.279482 | orchestrator | Tuesday 07 April 2026 01:08:16 +0000 (0:00:02.350) 0:01:53.653 ********* 2026-04-07 01:10:51.279486 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279489 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279493 | 
orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279497 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279501 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279504 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279509 | orchestrator | 2026-04-07 01:10:51.279517 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-07 01:10:51.279524 | orchestrator | Tuesday 07 April 2026 01:08:19 +0000 (0:00:02.722) 0:01:56.375 ********* 2026-04-07 01:10:51.279528 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279531 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279535 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279539 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279543 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279547 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279551 | orchestrator | 2026-04-07 01:10:51.279554 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-07 01:10:51.279560 | orchestrator | Tuesday 07 April 2026 01:08:22 +0000 (0:00:02.573) 0:01:58.949 ********* 2026-04-07 01:10:51.279566 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279571 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279575 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279579 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279583 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279587 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279591 | orchestrator | 2026-04-07 01:10:51.279595 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-07 01:10:51.279599 | orchestrator | Tuesday 07 April 2026 01:08:24 +0000 (0:00:02.744) 0:02:01.694 ********* 2026-04-07 01:10:51.279603 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279606 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279612 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279616 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279620 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279624 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279627 | orchestrator | 2026-04-07 01:10:51.279631 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-07 01:10:51.279635 | orchestrator | Tuesday 07 April 2026 01:08:27 +0000 (0:00:02.243) 0:02:03.938 ********* 2026-04-07 01:10:51.279639 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279643 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279646 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279650 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279654 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279658 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279661 | orchestrator | 2026-04-07 01:10:51.279665 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-07 01:10:51.279669 | orchestrator | Tuesday 07 April 2026 01:08:28 +0000 (0:00:01.792) 0:02:05.730 ********* 2026-04-07 01:10:51.279673 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279677 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279681 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279684 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279688 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279692 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279696 | orchestrator | 2026-04-07 01:10:51.279699 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 
2026-04-07 01:10:51.279703 | orchestrator | Tuesday 07 April 2026 01:08:31 +0000 (0:00:02.778) 0:02:08.508 ********* 2026-04-07 01:10:51.279707 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 01:10:51.279713 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279719 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 01:10:51.279723 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 01:10:51.279727 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279730 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279734 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 01:10:51.279741 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279745 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 01:10:51.279749 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279752 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 01:10:51.279756 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279760 | orchestrator | 2026-04-07 01:10:51.279764 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-07 01:10:51.279768 | orchestrator | Tuesday 07 April 2026 01:08:34 +0000 (0:00:02.637) 0:02:11.146 ********* 2026-04-07 01:10:51.279772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.279776 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.279786 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.279798 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 01:10:51.279808 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.279816 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 01:10:51.279825 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279829 | orchestrator | 2026-04-07 01:10:51.279833 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-07 01:10:51.279837 | orchestrator | Tuesday 07 April 2026 01:08:37 +0000 (0:00:02.912) 0:02:14.059 ********* 2026-04-07 01:10:51.279844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.279848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.279855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.279859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.279865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 01:10:51.279872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 01:10:51.279876 | orchestrator | 2026-04-07 01:10:51.279880 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-07 01:10:51.279884 | orchestrator | Tuesday 07 April 2026 01:08:39 +0000 (0:00:02.826) 0:02:16.885 ********* 2026-04-07 01:10:51.279888 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:51.279894 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:51.279900 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:51.279906 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:10:51.279912 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:10:51.279919 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:10:51.279925 | orchestrator | 2026-04-07 01:10:51.279930 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-04-07 01:10:51.279942 | orchestrator | Tuesday 07 April 2026 01:08:40 +0000 (0:00:00.593) 0:02:17.479 ********* 2026-04-07 01:10:51.279948 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:10:51.279954 | orchestrator | 2026-04-07 01:10:51.279959 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-04-07 01:10:51.279966 | orchestrator | Tuesday 07 April 2026 01:08:42 
+0000 (0:00:01.872) 0:02:19.351 ********* 2026-04-07 01:10:51.279973 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:10:51.279979 | orchestrator | 2026-04-07 01:10:51.279986 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-04-07 01:10:51.279992 | orchestrator | Tuesday 07 April 2026 01:08:44 +0000 (0:00:01.915) 0:02:21.267 ********* 2026-04-07 01:10:51.279998 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:10:51.280002 | orchestrator | 2026-04-07 01:10:51.280006 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-07 01:10:51.280009 | orchestrator | Tuesday 07 April 2026 01:09:37 +0000 (0:00:53.069) 0:03:14.337 ********* 2026-04-07 01:10:51.280013 | orchestrator | 2026-04-07 01:10:51.280017 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-07 01:10:51.280021 | orchestrator | Tuesday 07 April 2026 01:09:37 +0000 (0:00:00.081) 0:03:14.418 ********* 2026-04-07 01:10:51.280025 | orchestrator | 2026-04-07 01:10:51.280028 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-07 01:10:51.280032 | orchestrator | Tuesday 07 April 2026 01:09:37 +0000 (0:00:00.064) 0:03:14.483 ********* 2026-04-07 01:10:51.280036 | orchestrator | 2026-04-07 01:10:51.280040 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-07 01:10:51.280043 | orchestrator | Tuesday 07 April 2026 01:09:37 +0000 (0:00:00.064) 0:03:14.548 ********* 2026-04-07 01:10:51.280047 | orchestrator | 2026-04-07 01:10:51.280051 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-07 01:10:51.280055 | orchestrator | Tuesday 07 April 2026 01:09:37 +0000 (0:00:00.066) 0:03:14.614 ********* 2026-04-07 01:10:51.280058 | orchestrator | 2026-04-07 01:10:51.280062 | orchestrator | TASK [neutron 
: Flush Handlers] ************************************************ 2026-04-07 01:10:51.280066 | orchestrator | Tuesday 07 April 2026 01:09:37 +0000 (0:00:00.063) 0:03:14.677 ********* 2026-04-07 01:10:51.280070 | orchestrator | 2026-04-07 01:10:51.280074 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-07 01:10:51.280078 | orchestrator | Tuesday 07 April 2026 01:09:37 +0000 (0:00:00.118) 0:03:14.795 ********* 2026-04-07 01:10:51.280081 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:10:51.280085 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:10:51.280089 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:10:51.280093 | orchestrator | 2026-04-07 01:10:51.280096 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-07 01:10:51.280100 | orchestrator | Tuesday 07 April 2026 01:10:05 +0000 (0:00:27.251) 0:03:42.047 ********* 2026-04-07 01:10:51.280104 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:10:51.280108 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:10:51.280111 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:10:51.280115 | orchestrator | 2026-04-07 01:10:51.280119 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:10:51.280124 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-07 01:10:51.280128 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-07 01:10:51.280132 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-07 01:10:51.280138 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-07 01:10:51.280146 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  
rescued=0 ignored=0 2026-04-07 01:10:51.280149 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-07 01:10:51.280153 | orchestrator | 2026-04-07 01:10:51.280157 | orchestrator | 2026-04-07 01:10:51.280161 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:10:51.280165 | orchestrator | Tuesday 07 April 2026 01:10:49 +0000 (0:00:44.708) 0:04:26.755 ********* 2026-04-07 01:10:51.280168 | orchestrator | =============================================================================== 2026-04-07 01:10:51.280172 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 53.07s 2026-04-07 01:10:51.280176 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 44.71s 2026-04-07 01:10:51.280180 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.25s 2026-04-07 01:10:51.280186 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.35s 2026-04-07 01:10:51.280190 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 6.66s 2026-04-07 01:10:51.280194 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.28s 2026-04-07 01:10:51.280198 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.22s 2026-04-07 01:10:51.280202 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.86s 2026-04-07 01:10:51.280206 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.74s 2026-04-07 01:10:51.280209 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.65s 2026-04-07 01:10:51.280213 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.40s 2026-04-07 01:10:51.280217 
| orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.33s 2026-04-07 01:10:51.280220 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.28s 2026-04-07 01:10:51.280224 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.19s 2026-04-07 01:10:51.280228 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.18s 2026-04-07 01:10:51.280232 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.15s 2026-04-07 01:10:51.280235 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.04s 2026-04-07 01:10:51.280239 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 2.94s 2026-04-07 01:10:51.280243 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.91s 2026-04-07 01:10:51.280247 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.83s 2026-04-07 01:10:51.280250 | orchestrator | 2026-04-07 01:10:51 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:54.377028 | orchestrator | 2026-04-07 01:10:54 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED 2026-04-07 01:10:54.377114 | orchestrator | 2026-04-07 01:10:54 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state STARTED 2026-04-07 01:10:54.377472 | orchestrator | 2026-04-07 01:10:54 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED 2026-04-07 01:10:54.378052 | orchestrator | 2026-04-07 01:10:54 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:10:54.378084 | orchestrator | 2026-04-07 01:10:54 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:10:57.422854 | orchestrator | 2026-04-07 01:10:57 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED 
2026-04-07 01:10:57.423326 | orchestrator | 2026-04-07 01:10:57 | INFO  | Task 6dff7848-7ac5-4ca7-a084-0874c5ebeea0 is in state SUCCESS 2026-04-07 01:10:57.424719 | orchestrator | 2026-04-07 01:10:57.424778 | orchestrator | 2026-04-07 01:10:57.424789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:10:57.424797 | orchestrator | 2026-04-07 01:10:57.424803 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:10:57.424810 | orchestrator | Tuesday 07 April 2026 01:09:50 +0000 (0:00:00.374) 0:00:00.374 ********* 2026-04-07 01:10:57.424814 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:10:57.424819 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:10:57.424824 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:10:57.424830 | orchestrator | 2026-04-07 01:10:57.424837 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:10:57.424843 | orchestrator | Tuesday 07 April 2026 01:09:51 +0000 (0:00:00.311) 0:00:00.685 ********* 2026-04-07 01:10:57.424850 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-07 01:10:57.424856 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-07 01:10:57.424862 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-07 01:10:57.424868 | orchestrator | 2026-04-07 01:10:57.424873 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-07 01:10:57.424879 | orchestrator | 2026-04-07 01:10:57.424900 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-07 01:10:57.424906 | orchestrator | Tuesday 07 April 2026 01:09:51 +0000 (0:00:00.414) 0:00:01.100 ********* 2026-04-07 01:10:57.424913 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-04-07 01:10:57.424921 | orchestrator | 2026-04-07 01:10:57.424927 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-07 01:10:57.424934 | orchestrator | Tuesday 07 April 2026 01:09:52 +0000 (0:00:00.726) 0:00:01.826 ********* 2026-04-07 01:10:57.424940 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-07 01:10:57.424946 | orchestrator | 2026-04-07 01:10:57.424953 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-07 01:10:57.424959 | orchestrator | Tuesday 07 April 2026 01:09:56 +0000 (0:00:03.713) 0:00:05.540 ********* 2026-04-07 01:10:57.424965 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-07 01:10:57.424972 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-07 01:10:57.424978 | orchestrator | 2026-04-07 01:10:57.425075 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-07 01:10:57.425082 | orchestrator | Tuesday 07 April 2026 01:10:02 +0000 (0:00:06.190) 0:00:11.730 ********* 2026-04-07 01:10:57.425086 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 01:10:57.425090 | orchestrator | 2026-04-07 01:10:57.425094 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-07 01:10:57.425098 | orchestrator | Tuesday 07 April 2026 01:10:05 +0000 (0:00:02.981) 0:00:14.712 ********* 2026-04-07 01:10:57.425101 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-07 01:10:57.425105 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 01:10:57.425109 | orchestrator | 2026-04-07 01:10:57.425113 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 
2026-04-07 01:10:57.425116 | orchestrator | Tuesday 07 April 2026 01:10:08 +0000 (0:00:03.553) 0:00:18.265 ********* 2026-04-07 01:10:57.425120 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 01:10:57.425124 | orchestrator | 2026-04-07 01:10:57.425128 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-07 01:10:57.425132 | orchestrator | Tuesday 07 April 2026 01:10:12 +0000 (0:00:03.152) 0:00:21.417 ********* 2026-04-07 01:10:57.425135 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-07 01:10:57.425156 | orchestrator | 2026-04-07 01:10:57.425160 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-07 01:10:57.425163 | orchestrator | Tuesday 07 April 2026 01:10:15 +0000 (0:00:03.573) 0:00:24.990 ********* 2026-04-07 01:10:57.425167 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:57.425171 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:57.425175 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:57.425181 | orchestrator | 2026-04-07 01:10:57.425187 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-07 01:10:57.425202 | orchestrator | Tuesday 07 April 2026 01:10:16 +0000 (0:00:00.696) 0:00:25.687 ********* 2026-04-07 01:10:57.425216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425259 | orchestrator | 2026-04-07 01:10:57.425265 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-07 01:10:57.425271 | orchestrator | Tuesday 07 April 2026 01:10:18 +0000 (0:00:01.774) 0:00:27.461 ********* 2026-04-07 01:10:57.425277 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:57.425284 | orchestrator | 2026-04-07 01:10:57.425290 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-07 01:10:57.425296 | orchestrator | Tuesday 07 April 2026 01:10:18 +0000 (0:00:00.140) 0:00:27.602 ********* 2026-04-07 01:10:57.425302 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:57.425359 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:57.425366 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:57.425372 | orchestrator | 2026-04-07 01:10:57.425378 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-07 01:10:57.425384 | orchestrator | Tuesday 07 April 2026 01:10:18 +0000 (0:00:00.284) 0:00:27.886 ********* 2026-04-07 01:10:57.425390 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:10:57.425397 | orchestrator | 2026-04-07 01:10:57.425402 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-07 01:10:57.425408 | orchestrator | Tuesday 07 April 2026 01:10:19 +0000 (0:00:00.672) 0:00:28.559 ********* 2026-04-07 01:10:57.425414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425459 | orchestrator | 2026-04-07 01:10:57.425463 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-07 01:10:57.425467 | orchestrator | Tuesday 07 April 2026 01:10:20 +0000 (0:00:01.495) 0:00:30.055 ********* 2026-04-07 01:10:57.425471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 01:10:57.425480 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:57.425484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 01:10:57.425488 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:57.425495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 01:10:57.425501 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:57.425506 | orchestrator | 2026-04-07 01:10:57.425513 | orchestrator | TASK 
[service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-07 01:10:57.425518 | orchestrator | Tuesday 07 April 2026 01:10:21 +0000 (0:00:00.444) 0:00:30.499 ********* 2026-04-07 01:10:57.425532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 01:10:57.425541 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:57.425547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 01:10:57.425560 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:57.425566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 01:10:57.425571 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:57.425577 | orchestrator | 2026-04-07 01:10:57.425583 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-07 01:10:57.425589 | orchestrator | Tuesday 07 April 2026 01:10:21 +0000 (0:00:00.678) 0:00:31.177 ********* 2026-04-07 01:10:57.425600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425627 | orchestrator | 2026-04-07 01:10:57.425633 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-07 01:10:57.425638 | orchestrator | Tuesday 07 April 2026 01:10:23 +0000 (0:00:01.395) 0:00:32.572 ********* 2026-04-07 01:10:57.425643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425669 | orchestrator | 2026-04-07 01:10:57.425674 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-07 01:10:57.425680 | orchestrator | Tuesday 07 April 2026 01:10:25 +0000 (0:00:01.955) 0:00:34.528 ********* 2026-04-07 01:10:57.425695 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-07 01:10:57.425703 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-07 01:10:57.425708 
| orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-07 01:10:57.425715 | orchestrator | 2026-04-07 01:10:57.425722 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-07 01:10:57.425728 | orchestrator | Tuesday 07 April 2026 01:10:26 +0000 (0:00:01.340) 0:00:35.869 ********* 2026-04-07 01:10:57.425735 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:10:57.425741 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:10:57.425748 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:10:57.425756 | orchestrator | 2026-04-07 01:10:57.425761 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-07 01:10:57.425765 | orchestrator | Tuesday 07 April 2026 01:10:27 +0000 (0:00:01.150) 0:00:37.019 ********* 2026-04-07 01:10:57.425770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 01:10:57.425775 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:10:57.425780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 01:10:57.425784 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:10:57.425794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 01:10:57.425799 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:10:57.425809 | orchestrator | 2026-04-07 01:10:57.425813 | 
orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-07 01:10:57.425818 | orchestrator | Tuesday 07 April 2026 01:10:28 +0000 (0:00:00.683) 0:00:37.703 ********* 2026-04-07 01:10:57.425829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 01:10:57.425843 | orchestrator | 2026-04-07 01:10:57.425848 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-07 01:10:57.425852 | orchestrator | Tuesday 07 April 2026 01:10:29 +0000 (0:00:01.010) 0:00:38.714 ********* 2026-04-07 01:10:57.425857 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:10:57.425861 | orchestrator | 2026-04-07 01:10:57.425866 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-07 01:10:57.425870 | orchestrator | Tuesday 07 April 2026 01:10:31 +0000 (0:00:02.160) 0:00:40.874 ********* 2026-04-07 01:10:57.425875 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:10:57.425879 | orchestrator | 2026-04-07 01:10:57.425920 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-07 01:10:57.425926 | orchestrator | Tuesday 07 April 2026 01:10:33 +0000 (0:00:02.263) 
0:00:43.138 *********
2026-04-07 01:10:57.425930 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:10:57.425935 | orchestrator |
2026-04-07 01:10:57.425940 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-07 01:10:57.425948 | orchestrator | Tuesday 07 April 2026 01:10:47 +0000 (0:00:14.034) 0:00:57.172 *********
2026-04-07 01:10:57.425952 | orchestrator |
2026-04-07 01:10:57.425957 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-07 01:10:57.425962 | orchestrator | Tuesday 07 April 2026 01:10:47 +0000 (0:00:00.068) 0:00:57.241 *********
2026-04-07 01:10:57.425967 | orchestrator |
2026-04-07 01:10:57.425975 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-07 01:10:57.425980 | orchestrator | Tuesday 07 April 2026 01:10:47 +0000 (0:00:00.063) 0:00:57.304 *********
2026-04-07 01:10:57.425984 | orchestrator |
2026-04-07 01:10:57.425989 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-07 01:10:57.425993 | orchestrator | Tuesday 07 April 2026 01:10:48 +0000 (0:00:00.080) 0:00:57.385 *********
2026-04-07 01:10:57.425998 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:10:57.426002 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:10:57.426007 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:10:57.426052 | orchestrator |
2026-04-07 01:10:57.426059 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:10:57.426065 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 01:10:57.426072 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-07 01:10:57.426081 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-07 01:10:57.426086 | orchestrator |
2026-04-07 01:10:57.426090 | orchestrator |
2026-04-07 01:10:57.426094 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:10:57.426098 | orchestrator | Tuesday 07 April 2026 01:10:56 +0000 (0:00:08.328) 0:01:05.713 *********
2026-04-07 01:10:57.426102 | orchestrator | ===============================================================================
2026-04-07 01:10:57.426106 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.03s
2026-04-07 01:10:57.426110 | orchestrator | placement : Restart placement-api container ----------------------------- 8.33s
2026-04-07 01:10:57.426114 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.19s
2026-04-07 01:10:57.426117 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.71s
2026-04-07 01:10:57.426121 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.57s
2026-04-07 01:10:57.426125 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.55s
2026-04-07 01:10:57.426129 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.15s
2026-04-07 01:10:57.426133 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.98s
2026-04-07 01:10:57.426137 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.26s
2026-04-07 01:10:57.426140 | orchestrator | placement : Creating placement databases -------------------------------- 2.16s
2026-04-07 01:10:57.426144 | orchestrator | placement : Copying over placement.conf --------------------------------- 1.96s
2026-04-07 01:10:57.426148 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.77s
2026-04-07 01:10:57.426152 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.50s
2026-04-07 01:10:57.426155 | orchestrator | placement : Copying over config.json files for services ----------------- 1.40s
2026-04-07 01:10:57.426159 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.34s
2026-04-07 01:10:57.426163 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.15s
2026-04-07 01:10:57.426166 | orchestrator | placement : Check placement containers ---------------------------------- 1.01s
2026-04-07 01:10:57.426170 | orchestrator | placement : include_tasks ----------------------------------------------- 0.73s
2026-04-07 01:10:57.426178 | orchestrator | placement : include_tasks ----------------------------------------------- 0.70s
2026-04-07 01:10:57.426182 | orchestrator | placement : Copying over existing policy file --------------------------- 0.68s
2026-04-07 01:10:57.426186 | orchestrator | 2026-04-07 01:10:57 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED
2026-04-07 01:10:57.426595 | orchestrator | 2026-04-07 01:10:57 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED
2026-04-07 01:10:57.426622 | orchestrator | 2026-04-07 01:10:57 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:11:00.461990 | orchestrator | 2026-04-07 01:11:00 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED
2026-04-07 01:11:00.462484 | orchestrator | 2026-04-07 01:11:00 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED
2026-04-07 01:11:00.464764 | orchestrator | 2026-04-07 01:11:00 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED
2026-04-07 01:11:00.465851 | orchestrator | 2026-04-07 01:11:00 | INFO  | Task 04cb2e43-3133-4ffc-bce8-3a188718c51e is in state STARTED
2026-04-07 01:11:00.466398 | orchestrator | 2026-04-07 01:11:00 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:11:03.504866 | orchestrator | 2026-04-07 01:11:03 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED
2026-04-07 01:11:03.506220 | orchestrator | 2026-04-07 01:11:03 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED
2026-04-07 01:11:03.509123 | orchestrator | 2026-04-07 01:11:03 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED
2026-04-07 01:11:03.511619 | orchestrator | 2026-04-07 01:11:03 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED
2026-04-07 01:11:03.513017 | orchestrator | 2026-04-07 01:11:03 | INFO  | Task 04cb2e43-3133-4ffc-bce8-3a188718c51e is in state SUCCESS
2026-04-07 01:11:03.513118 | orchestrator | 2026-04-07 01:11:03 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:11:06.553798 | orchestrator | 2026-04-07 01:11:06 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED
2026-04-07 01:11:06.556606 | orchestrator | 2026-04-07 01:11:06 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED
2026-04-07 01:11:06.557264 | orchestrator | 2026-04-07 01:11:06 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED
2026-04-07 01:11:06.559053 | orchestrator | 2026-04-07 01:11:06 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED
2026-04-07 01:11:06.559136 | orchestrator | 2026-04-07 01:11:06 | INFO  | Wait 1 second(s) until the next check
2026-04-07 01:11:09.603990 | orchestrator | 2026-04-07 01:11:09 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED
2026-04-07 01:11:09.605226 | orchestrator | 2026-04-07 01:11:09 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED
2026-04-07 01:11:09.606170 | orchestrator | 2026-04-07 01:11:09 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state STARTED
2026-04-07 01:11:09.607315 | orchestrator | 2026-04-07 01:11:09 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED
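The repeated status checks above follow a simple poll-until-terminal-state loop: query each outstanding task, drop it once it reports SUCCESS (or FAILURE), sleep, and repeat. A minimal sketch of that pattern, with hypothetical task IDs and a toy `check` function (this is an illustration of the loop, not the OSISM implementation):

```python
import time

def poll_tasks(task_ids, check, interval=1.0, timeout=60.0):
    """Poll task states until every task reaches a terminal state.

    `check(task_id)` returns the current state string; tasks whose state
    is SUCCESS or FAILURE are dropped from the polling set.  Returns the
    set of tasks still pending when the timeout expired (empty on success).
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):
            state = check(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return pending

# Toy check function: each task reaches SUCCESS after a fixed number of polls.
calls = {"task-a": 0, "task-b": 0}
def fake_check(task_id):
    calls[task_id] += 1
    limit = {"task-a": 2, "task-b": 3}[task_id]
    return "SUCCESS" if calls[task_id] >= limit else "STARTED"

remaining = poll_tasks(["task-a", "task-b"], fake_check, interval=0.01)
```

The log shows exactly this shape: each iteration re-reports every task still in STARTED, and tasks disappear from later iterations once they reach SUCCESS.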
2026-04-07 01:11:36.995683 | orchestrator | 2026-04-07 01:11:36 | INFO  | Wait 1
second(s) until the next check
2026-04-07 01:11:40.036942 | orchestrator | 2026-04-07 01:11:40 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED
2026-04-07 01:11:40.038525 | orchestrator | 2026-04-07 01:11:40 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED
2026-04-07 01:11:40.040969 | orchestrator | 2026-04-07 01:11:40 | INFO  | Task 5108d865-6897-45fd-bb5f-db224cbe1fbb is in state SUCCESS
2026-04-07 01:11:40.042748 | orchestrator |
2026-04-07 01:11:40.042783 | orchestrator |
2026-04-07 01:11:40.042824 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 01:11:40.042835 | orchestrator |
2026-04-07 01:11:40.042847 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 01:11:40.042860 | orchestrator | Tuesday 07 April 2026 01:11:00 +0000 (0:00:00.362) 0:00:00.362 *********
2026-04-07 01:11:40.042869 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:11:40.042940 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:11:40.042954 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:11:40.042960 | orchestrator |
2026-04-07 01:11:40.042966 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 01:11:40.042990 | orchestrator | Tuesday 07 April 2026 01:11:00 +0000 (0:00:00.464) 0:00:00.826 *********
2026-04-07 01:11:40.042997 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-04-07 01:11:40.043003 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-04-07 01:11:40.043008 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-04-07 01:11:40.043014 | orchestrator |
2026-04-07 01:11:40.043022 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-04-07 01:11:40.043035 | orchestrator |
2026-04-07 01:11:40.043263 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-04-07 01:11:40.043281 | orchestrator | Tuesday 07 April 2026 01:11:01 +0000 (0:00:00.511) 0:00:01.338 *********
2026-04-07 01:11:40.043332 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:11:40.043342 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:11:40.043369 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:11:40.043378 | orchestrator |
2026-04-07 01:11:40.043388 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:11:40.043397 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:11:40.043408 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:11:40.043416 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:11:40.043427 | orchestrator |
2026-04-07 01:11:40.043433 | orchestrator |
2026-04-07 01:11:40.043438 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:11:40.043443 | orchestrator | Tuesday 07 April 2026 01:11:02 +0000 (0:00:01.092) 0:00:02.430 *********
2026-04-07 01:11:40.043449 | orchestrator | ===============================================================================
2026-04-07 01:11:40.043454 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.09s
2026-04-07 01:11:40.043459 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s
2026-04-07 01:11:40.043465 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2026-04-07 01:11:40.043470 | orchestrator |
2026-04-07 01:11:40.043475 | orchestrator |
2026-04-07 01:11:40.043489 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 01:11:40.043497 | orchestrator |
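The "Waiting for Nova public port to be UP" task above blocks until a TCP connect to the service port succeeds, the same readiness check `wait_for`-style tasks perform. A minimal sketch of that pattern in Python (hypothetical helper; kolla-ansible uses the Ansible `wait_for` module, not this code):

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, retry_delay=1.0):
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means something is listening on the port.
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(retry_delay)
    return False
```

For the Nova check in this job the equivalent call would target the API address and port seen elsewhere in the log, e.g. `wait_for_port("192.168.16.10", 8774)`.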
2026-04-07 01:11:40.043505 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 01:11:40.043514 | orchestrator | Tuesday 07 April 2026 01:09:58 +0000 (0:00:00.321) 0:00:00.321 *********
2026-04-07 01:11:40.043523 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:11:40.043532 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:11:40.043541 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:11:40.043550 | orchestrator |
2026-04-07 01:11:40.043560 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 01:11:40.043581 | orchestrator | Tuesday 07 April 2026 01:09:58 +0000 (0:00:00.293) 0:00:00.615 *********
2026-04-07 01:11:40.043587 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-07 01:11:40.043593 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-07 01:11:40.043598 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-07 01:11:40.043604 | orchestrator |
2026-04-07 01:11:40.043609 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-07 01:11:40.043614 | orchestrator |
2026-04-07 01:11:40.043620 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-07 01:11:40.043625 | orchestrator | Tuesday 07 April 2026 01:09:59 +0000 (0:00:00.322) 0:00:00.937 *********
2026-04-07 01:11:40.043630 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:11:40.043636 | orchestrator |
2026-04-07 01:11:40.043641 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-04-07 01:11:40.043646 | orchestrator | Tuesday 07 April 2026 01:10:00 +0000 (0:00:00.934) 0:00:01.872 *********
2026-04-07 01:11:40.043652 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-04-07 01:11:40.043658 | orchestrator |
2026-04-07 01:11:40.043663 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-04-07 01:11:40.043669 | orchestrator | Tuesday 07 April 2026 01:10:03 +0000 (0:00:03.904) 0:00:05.777 *********
2026-04-07 01:11:40.043674 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-04-07 01:11:40.043679 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-04-07 01:11:40.043685 | orchestrator |
2026-04-07 01:11:40.043690 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-04-07 01:11:40.043701 | orchestrator | Tuesday 07 April 2026 01:10:10 +0000 (0:00:06.179) 0:00:11.956 *********
2026-04-07 01:11:40.043706 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-07 01:11:40.043712 | orchestrator |
2026-04-07 01:11:40.043717 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-04-07 01:11:40.043723 | orchestrator | Tuesday 07 April 2026 01:10:13 +0000 (0:00:03.321) 0:00:15.277 *********
2026-04-07 01:11:40.043737 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-04-07 01:11:40.043743 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 01:11:40.043748 | orchestrator |
2026-04-07 01:11:40.043754 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-04-07 01:11:40.043759 | orchestrator | Tuesday 07 April 2026 01:10:16 +0000 (0:00:03.462) 0:00:18.740 *********
2026-04-07 01:11:40.043764 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-07 01:11:40.043770 | orchestrator |
2026-04-07 01:11:40.043775 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-04-07 01:11:40.043781 | orchestrator
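(Editor's note: the service-ks-register tasks in this play register the magnum (container-infra) service in Keystone along with one endpoint per interface; the scheme visible in this log — internal vs. public FQDN, same port, same `/v1` path — can be sketched with a hypothetical helper. The function name and defaults are illustrative, not the role's actual variables:)

```python
def magnum_endpoint_urls(internal_fqdn: str, external_fqdn: str, port: int = 9511) -> dict:
    """Build the per-interface Keystone endpoint URLs for the magnum service.

    Mirrors the URL pattern seen in the log: both interfaces share the
    magnum API port (9511 by default) and the /v1 path; only the FQDN differs.
    """
    return {
        "internal": f"https://{internal_fqdn}:{port}/v1",
        "public": f"https://{external_fqdn}:{port}/v1",
    }
```

(With this testbed's FQDNs, `magnum_endpoint_urls("api-int.testbed.osism.xyz", "api.testbed.osism.xyz")` reproduces the two endpoint URLs registered in the log.)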
| Tuesday 07 April 2026 01:10:20 +0000 (0:00:03.086) 0:00:21.826 ********* 2026-04-07 01:11:40.043786 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-07 01:11:40.043791 | orchestrator | 2026-04-07 01:11:40.043797 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-07 01:11:40.043802 | orchestrator | Tuesday 07 April 2026 01:10:23 +0000 (0:00:03.444) 0:00:25.271 ********* 2026-04-07 01:11:40.043808 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:11:40.043813 | orchestrator | 2026-04-07 01:11:40.043818 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-07 01:11:40.043825 | orchestrator | Tuesday 07 April 2026 01:10:26 +0000 (0:00:03.279) 0:00:28.551 ********* 2026-04-07 01:11:40.043834 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:11:40.043849 | orchestrator | 2026-04-07 01:11:40.043857 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-07 01:11:40.043866 | orchestrator | Tuesday 07 April 2026 01:10:30 +0000 (0:00:03.460) 0:00:32.011 ********* 2026-04-07 01:11:40.043874 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:11:40.043882 | orchestrator | 2026-04-07 01:11:40.043890 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-07 01:11:40.043899 | orchestrator | Tuesday 07 April 2026 01:10:33 +0000 (0:00:03.123) 0:00:35.135 ********* 2026-04-07 01:11:40.043915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.043929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.043946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.043965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.043972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.043980 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:11:40.043986 | orchestrator |
2026-04-07 01:11:40.043992 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-07 01:11:40.043997 | orchestrator | Tuesday 07 April 2026 01:10:35 +0000 (0:00:01.747) 0:00:36.883 *********
2026-04-07 01:11:40.044002 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:11:40.044008 | orchestrator |
2026-04-07 01:11:40.044013 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-07 01:11:40.044018 | orchestrator | Tuesday 07 April 2026 01:10:35 +0000 (0:00:00.105) 0:00:36.988 *********
2026-04-07 01:11:40.044024 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:11:40.044034 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:11:40.044039 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:11:40.044044 | orchestrator |
2026-04-07 01:11:40.044050 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-07 01:11:40.044057 | orchestrator | Tuesday 07 April 2026 01:10:35 +0000 (0:00:00.287) 0:00:37.275 *********
2026-04-07 01:11:40.044066 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 01:11:40.044079 | orchestrator |
2026-04-07 01:11:40.044089 | orchestrator | TASK [magnum
: Copying over kubeconfig file] *********************************** 2026-04-07 01:11:40.044098 | orchestrator | Tuesday 07 April 2026 01:10:36 +0000 (0:00:00.901) 0:00:38.177 ********* 2026-04-07 01:11:40.044107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044182 | orchestrator | 2026-04-07 01:11:40.044188 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-07 01:11:40.044194 | orchestrator | Tuesday 07 April 2026 01:10:38 +0000 (0:00:02.158) 0:00:40.336 ********* 2026-04-07 01:11:40.044199 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:11:40.044204 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:11:40.044210 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:11:40.044215 | orchestrator | 2026-04-07 01:11:40.044221 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-07 01:11:40.044230 | 
orchestrator | Tuesday 07 April 2026 01:10:38 +0000 (0:00:00.467) 0:00:40.803 ********* 2026-04-07 01:11:40.044235 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:11:40.044241 | orchestrator | 2026-04-07 01:11:40.044246 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-07 01:11:40.044252 | orchestrator | Tuesday 07 April 2026 01:10:39 +0000 (0:00:00.502) 0:00:41.306 ********* 2026-04-07 01:11:40.044257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044345 | orchestrator | 2026-04-07 01:11:40.044351 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-07 01:11:40.044356 | orchestrator | Tuesday 07 April 2026 01:10:41 +0000 (0:00:02.211) 0:00:43.517 ********* 2026-04-07 01:11:40.044365 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 01:11:40.044375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:11:40.044381 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:11:40.044386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 01:11:40.044396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:11:40.044402 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:11:40.044408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 01:11:40.044423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:11:40.044433 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:11:40.044446 | orchestrator | 2026-04-07 01:11:40.044456 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-07 01:11:40.044465 | orchestrator | Tuesday 07 April 2026 01:10:42 +0000 (0:00:00.940) 0:00:44.457 ********* 2026-04-07 01:11:40.044474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 01:11:40.044483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:11:40.044492 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:11:40.044506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 01:11:40.044515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:11:40.044530 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:11:40.044543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 01:11:40.044552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:11:40.044561 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:11:40.044570 | orchestrator | 2026-04-07 01:11:40.044578 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-07 01:11:40.044587 | orchestrator | Tuesday 07 April 2026 01:10:43 +0000 (0:00:00.817) 0:00:45.275 ********* 2026-04-07 01:11:40.044602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044680 | orchestrator | 2026-04-07 01:11:40.044686 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-07 01:11:40.044697 | orchestrator | Tuesday 07 April 2026 01:10:45 +0000 (0:00:02.146) 0:00:47.422 ********* 2026-04-07 01:11:40.044703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044752 | orchestrator | 2026-04-07 01:11:40.044757 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-07 01:11:40.044763 | orchestrator | Tuesday 07 April 
2026 01:10:51 +0000 (0:00:06.202) 0:00:53.624 ********* 2026-04-07 01:11:40.044771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 01:11:40.044777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:11:40.044783 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:11:40.044789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 01:11:40.044798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:11:40.044807 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:11:40.044813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 01:11:40.044821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:11:40.044827 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:11:40.044832 | orchestrator | 2026-04-07 01:11:40.044838 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-07 01:11:40.044843 | orchestrator | Tuesday 07 April 2026 01:10:52 +0000 (0:00:01.040) 0:00:54.664 ********* 2026-04-07 01:11:40.044848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 01:11:40.044873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:11:40.044892 | orchestrator | 2026-04-07 01:11:40.044898 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-07 01:11:40.044903 | orchestrator | Tuesday 07 April 2026 01:10:55 +0000 (0:00:02.474) 0:00:57.139 ********* 2026-04-07 01:11:40.044909 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:11:40.044914 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:11:40.044919 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:11:40.044924 | orchestrator | 2026-04-07 01:11:40.044930 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-07 01:11:40.044946 | orchestrator | Tuesday 07 April 2026 01:10:55 +0000 (0:00:00.207) 0:00:57.347 ********* 2026-04-07 01:11:40.044952 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:11:40.044957 | orchestrator | 2026-04-07 01:11:40.044963 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-07 01:11:40.044968 | orchestrator | Tuesday 07 April 2026 01:10:57 +0000 (0:00:01.793) 0:00:59.140 ********* 2026-04-07 01:11:40.044973 | orchestrator | changed: 
[testbed-node-0] 2026-04-07 01:11:40.044979 | orchestrator | 2026-04-07 01:11:40.044984 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-07 01:11:40.044989 | orchestrator | Tuesday 07 April 2026 01:10:59 +0000 (0:00:01.927) 0:01:01.067 ********* 2026-04-07 01:11:40.044998 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:11:40.045003 | orchestrator | 2026-04-07 01:11:40.045009 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-07 01:11:40.045015 | orchestrator | Tuesday 07 April 2026 01:11:16 +0000 (0:00:16.792) 0:01:17.859 ********* 2026-04-07 01:11:40.045024 | orchestrator | 2026-04-07 01:11:40.045033 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-07 01:11:40.045048 | orchestrator | Tuesday 07 April 2026 01:11:16 +0000 (0:00:00.241) 0:01:18.101 ********* 2026-04-07 01:11:40.045058 | orchestrator | 2026-04-07 01:11:40.045067 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-07 01:11:40.045075 | orchestrator | Tuesday 07 April 2026 01:11:16 +0000 (0:00:00.063) 0:01:18.165 ********* 2026-04-07 01:11:40.045084 | orchestrator | 2026-04-07 01:11:40.045093 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-07 01:11:40.045102 | orchestrator | Tuesday 07 April 2026 01:11:16 +0000 (0:00:00.066) 0:01:18.231 ********* 2026-04-07 01:11:40.045111 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:11:40.045121 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:11:40.045131 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:11:40.045140 | orchestrator | 2026-04-07 01:11:40.045149 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-07 01:11:40.045158 | orchestrator | Tuesday 07 April 2026 01:11:29 +0000 (0:00:12.914) 
0:01:31.146 ********* 2026-04-07 01:11:40.045168 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:11:40.045176 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:11:40.045192 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:11:40.045201 | orchestrator | 2026-04-07 01:11:40.045209 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:11:40.045218 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 01:11:40.045227 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 01:11:40.045235 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 01:11:40.045244 | orchestrator | 2026-04-07 01:11:40.045253 | orchestrator | 2026-04-07 01:11:40.045261 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:11:40.045270 | orchestrator | Tuesday 07 April 2026 01:11:38 +0000 (0:00:09.670) 0:01:40.816 ********* 2026-04-07 01:11:40.045279 | orchestrator | =============================================================================== 2026-04-07 01:11:40.045310 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.79s 2026-04-07 01:11:40.045320 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.91s 2026-04-07 01:11:40.045334 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.67s 2026-04-07 01:11:40.045343 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.20s 2026-04-07 01:11:40.045363 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.18s 2026-04-07 01:11:40.045372 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.90s 
2026-04-07 01:11:40.045380 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.46s 2026-04-07 01:11:40.045389 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.46s 2026-04-07 01:11:40.045398 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.44s 2026-04-07 01:11:40.045423 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.32s 2026-04-07 01:11:40.045432 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.28s 2026-04-07 01:11:40.045441 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.12s 2026-04-07 01:11:40.045450 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.09s 2026-04-07 01:11:40.045458 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.47s 2026-04-07 01:11:40.045467 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.21s 2026-04-07 01:11:40.045475 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.16s 2026-04-07 01:11:40.045485 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.15s 2026-04-07 01:11:40.045494 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.93s 2026-04-07 01:11:40.045502 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.79s 2026-04-07 01:11:40.045511 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.75s 2026-04-07 01:11:40.045520 | orchestrator | 2026-04-07 01:11:40 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED 2026-04-07 01:11:40.045529 | orchestrator | 2026-04-07 01:11:40 | INFO  | Wait 1 second(s) until the next check 2026-04-07 
01:11:43.090844 | orchestrator | 2026-04-07 01:11:43 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED
01:11:43.092819 | orchestrator | 2026-04-07 01:11:43 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED
01:11:43.094383 | orchestrator | 2026-04-07 01:11:43 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state STARTED
01:11:43.094441 | orchestrator | 2026-04-07 01:11:43 | INFO  | Wait 1 second(s) until the next check
[... identical status checks (all three tasks in state STARTED) repeat every ~3 seconds from 01:11:46 to 01:12:53 ...]
2026-04-07 01:12:56.207966 | orchestrator | 2026-04-07 01:12:56 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED
2026-04-07 01:12:56.210148 | orchestrator | 2026-04-07 01:12:56 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED
2026-04-07 01:12:56.214193 | orchestrator | 2026-04-07 01:12:56 | INFO  | Task 411180cc-0f79-445b-a98e-d81ac7152317 is in state SUCCESS
2026-04-07 01:12:56.216385 | orchestrator |
2026-04-07 01:12:56.216505 | orchestrator |
2026-04-07 01:12:56.216520 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 01:12:56.216526 | orchestrator |
2026-04-07 01:12:56.216550 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-07 01:12:56.216555 | orchestrator | Tuesday 07 April 2026 01:04:13 +0000 (0:00:00.471) 0:00:00.471 *********
2026-04-07 01:12:56.216559 | orchestrator | changed: [testbed-manager]
2026-04-07 01:12:56.216565 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.216569 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:12:56.216573 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:12:56.216576 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:12:56.216580 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:12:56.216584 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:12:56.216588 | orchestrator |
2026-04-07 01:12:56.216592 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 01:12:56.216595 | orchestrator | Tuesday 07 April 2026 01:04:13 +0000 (0:00:00.626) 0:00:01.097 *********
2026-04-07 01:12:56.216599 | orchestrator | changed: [testbed-manager]
2026-04-07 01:12:56.216603 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.216607 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:12:56.216611 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:12:56.216614 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:12:56.216618 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:12:56.216622 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:12:56.216625 | orchestrator |
2026-04-07 01:12:56.216629 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 01:12:56.216633 | orchestrator | Tuesday 07 April 2026 01:04:14 +0000 (0:00:00.686) 0:00:01.783 *********
2026-04-07 01:12:56.216706 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-07 01:12:56.216712 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-07 01:12:56.216716 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-07 01:12:56.216720 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-07 01:12:56.216724 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-07 01:12:56.216728 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-07 01:12:56.216731 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-07 01:12:56.216735 | orchestrator |
2026-04-07 01:12:56.216739 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-07 01:12:56.216743 | orchestrator |
2026-04-07 01:12:56.216746 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-07 01:12:56.216750 | orchestrator | Tuesday 07 April 2026 01:04:15 +0000 (0:00:00.733) 0:00:02.517 *********
2026-04-07 01:12:56.216755 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:12:56.216759 | orchestrator |
2026-04-07 01:12:56.216895 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-07 01:12:56.216901 | orchestrator | Tuesday 07 April 2026 01:04:15 +0000 (0:00:00.631) 0:00:03.148 *********
2026-04-07 01:12:56.216906 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-04-07 01:12:56.216910 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-04-07 01:12:56.216914 | orchestrator |
2026-04-07 01:12:56.216918 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-07 01:12:56.216922 | orchestrator | Tuesday 07 April 2026 01:04:19 +0000 (0:00:03.667) 0:00:06.816 *********
2026-04-07 01:12:56.216926 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 01:12:56.216930 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 01:12:56.216933 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.216937 | orchestrator |
2026-04-07 01:12:56.216941 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-07 01:12:56.216979 | orchestrator | Tuesday 07 April 2026 01:04:23 +0000 (0:00:03.569) 0:00:10.385 *********
2026-04-07 01:12:56.216985 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.216989 | orchestrator |
2026-04-07 01:12:56.216993 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-07 01:12:56.217003 | orchestrator | Tuesday 07 April 2026 01:04:23 +0000 (0:00:00.859) 0:00:11.244 *********
2026-04-07 01:12:56.217007 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.217011 | orchestrator |
2026-04-07 01:12:56.217015 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-07 01:12:56.217019 | orchestrator | Tuesday 07 April 2026 01:04:25 +0000 (0:00:01.584) 0:00:12.829 *********
2026-04-07 01:12:56.217022 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.217026 | orchestrator |
2026-04-07 01:12:56.217030 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-07 01:12:56.217034 | orchestrator | Tuesday 07 April 2026 01:04:28 +0000 (0:00:02.948) 0:00:15.778 *********
2026-04-07 01:12:56.217037 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.217041 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.217045 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.217049 | orchestrator |
2026-04-07 01:12:56.217052 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-07 01:12:56.217065 | orchestrator | Tuesday 07 April 2026 01:04:28 +0000 (0:00:00.340) 0:00:16.119 *********
2026-04-07 01:12:56.217069 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:12:56.217073 | orchestrator |
2026-04-07 01:12:56.217077 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-07 01:12:56.217081 | orchestrator | Tuesday 07 April 2026 01:04:54 +0000 (0:00:25.713) 0:00:41.832 *********
2026-04-07 01:12:56.217084 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.217088 | orchestrator |
2026-04-07 01:12:56.217092 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-07 01:12:56.217096 | orchestrator | Tuesday 07 April 2026 01:05:06 +0000 (0:00:12.044) 0:00:53.877 *********
2026-04-07 01:12:56.217100 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:12:56.217103 | orchestrator |
2026-04-07 01:12:56.217107 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-07 01:12:56.217111 | orchestrator | Tuesday 07 April 2026 01:05:18 +0000 (0:00:11.710) 0:01:05.587 *********
2026-04-07 01:12:56.217125 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:12:56.217129 | orchestrator |
2026-04-07 01:12:56.217132 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-07 01:12:56.217136 | orchestrator | Tuesday 07 April 2026 01:05:19 +0000 (0:00:01.567) 0:01:07.155 *********
2026-04-07 01:12:56.217140 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.217144 | orchestrator |
2026-04-07 01:12:56.217148 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-07 01:12:56.217152 | orchestrator | Tuesday 07 April 2026 01:05:20 +0000 (0:00:01.048) 0:01:08.203 *********
2026-04-07 01:12:56.217156 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:12:56.217160 | orchestrator |
2026-04-07 01:12:56.217164 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-07 01:12:56.217168 | orchestrator | Tuesday 07 April 2026 01:05:21 +0000 (0:00:00.712) 0:01:08.916 *********
2026-04-07 01:12:56.217172 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:12:56.217175 | orchestrator |
2026-04-07 01:12:56.217179 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-07 01:12:56.217183 | orchestrator | Tuesday 07 April 2026 01:05:37 +0000 (0:00:15.911) 0:01:24.827 *********
2026-04-07 01:12:56.217187 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.217191 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.217195 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.217198 | orchestrator |
2026-04-07 01:12:56.217202 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-07 01:12:56.217206 | orchestrator |
2026-04-07 01:12:56.217210 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-07 01:12:56.217214 | orchestrator | Tuesday 07 April 2026 01:05:37 +0000 (0:00:00.262) 0:01:25.089 *********
2026-04-07 01:12:56.217221 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:12:56.217250 | orchestrator |
2026-04-07 01:12:56.217304 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-07 01:12:56.217310 | orchestrator | Tuesday 07 April 2026 01:05:38 +0000 (0:00:00.644) 0:01:25.734 *********
2026-04-07 01:12:56.217314 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.217318 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.217321 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.217351 | orchestrator |
2026-04-07 01:12:56.217355 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-07 01:12:56.217359 | orchestrator | Tuesday 07 April 2026 01:05:40 +0000 (0:00:01.775) 0:01:27.509 *********
2026-04-07 01:12:56.217363 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.217367 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.217530 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.217538 | orchestrator |
2026-04-07 01:12:56.217544 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-07 01:12:56.217549 | orchestrator | Tuesday 07 April 2026 01:05:42 +0000 (0:00:02.254) 0:01:29.764 *********
2026-04-07 01:12:56.217555 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.217561 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.217567 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.217623 | orchestrator |
2026-04-07 01:12:56.217627 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-07 01:12:56.217631 | orchestrator | Tuesday 07 April 2026 01:05:42 +0000 (0:00:00.498) 0:01:30.263 *********
2026-04-07 01:12:56.217635 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-07 01:12:56.217639 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.217702 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-07 01:12:56.217712 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.217720 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-07 01:12:56.217726 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-07 01:12:56.217732 | orchestrator |
2026-04-07 01:12:56.217945 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-07 01:12:56.217955 | orchestrator | Tuesday 07 April 2026 01:05:50 +0000 (0:00:07.611) 0:01:37.874 *********
2026-04-07 01:12:56.217961 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.217967 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.217973 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.217978 | orchestrator |
2026-04-07 01:12:56.217985 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-07 01:12:56.217990 | orchestrator | Tuesday 07 April 2026 01:05:51 +0000 (0:00:00.767) 0:01:38.642 *********
2026-04-07 01:12:56.218104 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-07 01:12:56.218113 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.218119 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-07 01:12:56.218125 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218131 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-07 01:12:56.218137 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218143 | orchestrator |
2026-04-07 01:12:56.218170 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-07 01:12:56.218188 | orchestrator | Tuesday 07 April 2026 01:05:53 +0000 (0:00:02.185) 0:01:40.828 *********
2026-04-07 01:12:56.218195 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218202 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218208 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.218214 | orchestrator |
2026-04-07 01:12:56.218220 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-07 01:12:56.218227 | orchestrator | Tuesday 07 April 2026 01:05:54 +0000 (0:00:00.531) 0:01:41.359 *********
2026-04-07 01:12:56.218233 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218246 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218250 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.218253 | orchestrator |
2026-04-07 01:12:56.218284 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-07 01:12:56.218288 | orchestrator | Tuesday 07 April 2026 01:05:55 +0000 (0:00:00.994) 0:01:42.354 *********
2026-04-07 01:12:56.218292 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218296 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218331 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.218336 | orchestrator |
2026-04-07 01:12:56.218339 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-07 01:12:56.218343 | orchestrator | Tuesday 07 April 2026 01:05:57 +0000 (0:00:02.744) 0:01:45.098 *********
2026-04-07 01:12:56.218347 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218508 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218514 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:12:56.218521 | orchestrator |
2026-04-07 01:12:56.218528 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-07 01:12:56.218534 | orchestrator | Tuesday 07 April 2026 01:06:18 +0000 (0:00:20.590) 0:02:05.688 *********
2026-04-07 01:12:56.218540 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218545 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218551 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:12:56.218668 | orchestrator |
2026-04-07 01:12:56.218676 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-07 01:12:56.218680 | orchestrator | Tuesday 07 April 2026 01:06:29 +0000 (0:00:10.645) 0:02:16.334 *********
2026-04-07 01:12:56.218683 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218687 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:12:56.218691 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218694 | orchestrator |
2026-04-07 01:12:56.218698 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-07 01:12:56.218702 | orchestrator | Tuesday 07 April 2026 01:06:31 +0000 (0:00:02.562) 0:02:18.896 *********
2026-04-07 01:12:56.218706 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218710 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218714 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:12:56.218718 | orchestrator |
2026-04-07 01:12:56.218721 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-07 01:12:56.218725 | orchestrator | Tuesday 07 April 2026 01:06:43 +0000 (0:00:11.527) 0:02:30.424 *********
2026-04-07 01:12:56.218729 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.218733 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218736 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218740 | orchestrator |
2026-04-07 01:12:56.218744 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-07 01:12:56.218748 | orchestrator | Tuesday 07 April 2026 01:06:44 +0000 (0:00:01.068) 0:02:31.493 *********
2026-04-07 01:12:56.218752 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.218755 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.218759 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.218763 | orchestrator |
2026-04-07 01:12:56.218766 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-07 01:12:56.218771 | orchestrator |
2026-04-07 01:12:56.218774 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-07 01:12:56.218778 | orchestrator | Tuesday 07 April 2026 01:06:44 +0000 (0:00:00.262) 0:02:31.755 *********
2026-04-07 01:12:56.218782 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:12:56.218787 | orchestrator |
2026-04-07 01:12:56.218791 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-04-07 01:12:56.218794 | orchestrator | Tuesday 07 April 2026 01:06:45 +0000 (0:00:00.613) 0:02:32.368 *********
2026-04-07 01:12:56.218798 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-07 01:12:56.218810 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-04-07 01:12:56.218814 | orchestrator |
2026-04-07 01:12:56.218818 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-04-07 01:12:56.218822 | orchestrator | Tuesday 07 April 2026 01:06:47 +0000 (0:00:02.843) 0:02:35.212 *********
2026-04-07 01:12:56.218826 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-07 01:12:56.218831 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-07 01:12:56.218835 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-07 01:12:56.218839 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-07 01:12:56.218842 | orchestrator |
2026-04-07 01:12:56.218846 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-07 01:12:56.218850 | orchestrator | Tuesday 07 April 2026 01:06:53 +0000 (0:00:05.690) 0:02:40.902 *********
2026-04-07 01:12:56.218854 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-07 01:12:56.218857 | orchestrator |
2026-04-07 01:12:56.218861 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-07 01:12:56.218865 | orchestrator | Tuesday 07 April 2026 01:06:56 +0000 (0:00:03.052) 0:02:43.954 *********
2026-04-07 01:12:56.218869 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-07 01:12:56.218878 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 01:12:56.218882 | orchestrator |
2026-04-07 01:12:56.218886 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-07 01:12:56.218889 | orchestrator | Tuesday 07 April 2026 01:07:00 +0000 (0:00:03.828) 0:02:47.783 *********
2026-04-07 01:12:56.218893 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-07 01:12:56.218897 | orchestrator |
2026-04-07 01:12:56.218901 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-04-07 01:12:56.218906 | orchestrator | Tuesday 07 April 2026 01:07:03 +0000 (0:00:03.202) 0:02:50.986 *********
2026-04-07 01:12:56.218912 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-04-07 01:12:56.218918 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-04-07 01:12:56.218924 | orchestrator |
2026-04-07 01:12:56.218932 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-07 01:12:56.219008 | orchestrator | Tuesday 07 April 2026 01:07:10 +0000 (0:00:06.991) 0:02:57.977 *********
2026-04-07 01:12:56.219023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 01:12:56.219034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.219053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 01:12:56.219095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 01:12:56.219102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.219106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.219114 | orchestrator |
2026-04-07 01:12:56.219118 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-04-07 01:12:56.219122 | orchestrator | Tuesday 07 April 2026 01:07:13 +0000 (0:00:03.180) 0:03:01.157 *********
2026-04-07 01:12:56.219126 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.219130 | orchestrator |
2026-04-07 01:12:56.219133 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-04-07 01:12:56.219137 | orchestrator | Tuesday 07 April 2026 01:07:14 +0000 (0:00:00.265) 0:03:01.423 *********
2026-04-07 01:12:56.219141 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.219145 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.219149 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.219152 | orchestrator |
2026-04-07 01:12:56.219156 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-04-07 01:12:56.219160 | orchestrator | Tuesday 07 April 2026 01:07:14 +0000 (0:00:00.625) 0:03:02.049 *********
2026-04-07 01:12:56.219164 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 01:12:56.219168 | orchestrator |
2026-04-07 01:12:56.219172 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-04-07 01:12:56.219175 | orchestrator | Tuesday 07 April 2026 01:07:16 +0000 (0:00:01.471) 0:03:03.520 *********
2026-04-07 01:12:56.219179 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.219183 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.219187 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.219190 |
orchestrator | 2026-04-07 01:12:56.219194 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-07 01:12:56.219198 | orchestrator | Tuesday 07 April 2026 01:07:16 +0000 (0:00:00.297) 0:03:03.817 ********* 2026-04-07 01:12:56.219202 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:12:56.219206 | orchestrator | 2026-04-07 01:12:56.219210 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-07 01:12:56.219213 | orchestrator | Tuesday 07 April 2026 01:07:17 +0000 (0:00:00.923) 0:03:04.741 ********* 2026-04-07 01:12:56.219220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219238 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219447 | orchestrator | 2026-04-07 01:12:56.219451 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-07 01:12:56.219456 | orchestrator | Tuesday 07 April 2026 01:07:20 +0000 (0:00:03.133) 0:03:07.875 ********* 2026-04-07 01:12:56.219465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 01:12:56.219470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.219474 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.219478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 01:12:56.219485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.219490 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.219516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 01:12:56.219525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.219529 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.219533 | orchestrator | 2026-04-07 01:12:56.219537 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-07 01:12:56.219541 | orchestrator | Tuesday 07 April 2026 01:07:21 +0000 (0:00:00.804) 0:03:08.680 ********* 2026-04-07 01:12:56.219545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 01:12:56.219552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.219557 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.219574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-04-07 01:12:56.219583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.219587 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.219591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 
01:12:56.219595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.219599 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.219602 | orchestrator | 2026-04-07 01:12:56.219606 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-07 01:12:56.219610 | orchestrator | Tuesday 07 April 2026 01:07:23 +0000 (0:00:01.866) 0:03:10.546 ********* 2026-04-07 01:12:56.219629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-07 01:12:56.219679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219683 | orchestrator | 2026-04-07 01:12:56.219688 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-07 01:12:56.219692 | orchestrator | Tuesday 07 April 2026 01:07:25 +0000 (0:00:02.719) 0:03:13.266 ********* 2026-04-07 01:12:56.219698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219742 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219747 | orchestrator | 2026-04-07 01:12:56.219751 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-07 01:12:56.219755 | orchestrator | Tuesday 07 April 2026 01:07:35 +0000 (0:00:09.323) 0:03:22.589 ********* 2026-04-07 01:12:56.219762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 01:12:56.219782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.219787 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.219792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 01:12:56.219797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.219802 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.219807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-04-07 01:12:56.219816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.219820 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.219825 | orchestrator | 2026-04-07 01:12:56.219829 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-07 01:12:56.219834 | orchestrator | Tuesday 07 April 2026 01:07:35 +0000 (0:00:00.633) 0:03:23.223 ********* 2026-04-07 01:12:56.219838 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:12:56.219842 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:12:56.219869 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:12:56.219874 | orchestrator | 2026-04-07 01:12:56.219892 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-07 01:12:56.219898 | orchestrator | Tuesday 07 April 2026 01:07:38 +0000 (0:00:02.688) 0:03:25.912 ********* 2026-04-07 01:12:56.219902 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.219906 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.219910 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.219914 | orchestrator | 2026-04-07 01:12:56.219917 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-07 01:12:56.219921 | orchestrator | Tuesday 07 April 2026 01:07:39 +0000 (0:00:00.646) 0:03:26.558 ********* 
2026-04-07 01:12:56.219926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 01:12:56.219961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.219969 | orchestrator | 2026-04-07 01:12:56.219973 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-07 01:12:56.219976 | orchestrator | Tuesday 07 April 2026 01:07:42 +0000 (0:00:03.092) 0:03:29.650 ********* 2026-04-07 01:12:56.219980 | orchestrator | 2026-04-07 01:12:56.219984 | 
orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-07 01:12:56.219992 | orchestrator | Tuesday 07 April 2026 01:07:42 +0000 (0:00:00.249) 0:03:29.900 ********* 2026-04-07 01:12:56.219995 | orchestrator | 2026-04-07 01:12:56.219999 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-07 01:12:56.220003 | orchestrator | Tuesday 07 April 2026 01:07:42 +0000 (0:00:00.299) 0:03:30.199 ********* 2026-04-07 01:12:56.220007 | orchestrator | 2026-04-07 01:12:56.220010 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-07 01:12:56.220014 | orchestrator | Tuesday 07 April 2026 01:07:43 +0000 (0:00:00.785) 0:03:30.985 ********* 2026-04-07 01:12:56.220018 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:12:56.220022 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:12:56.220025 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:12:56.220029 | orchestrator | 2026-04-07 01:12:56.220033 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-07 01:12:56.220037 | orchestrator | Tuesday 07 April 2026 01:08:06 +0000 (0:00:22.665) 0:03:53.650 ********* 2026-04-07 01:12:56.220041 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:12:56.220044 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:12:56.220048 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:12:56.220052 | orchestrator | 2026-04-07 01:12:56.220056 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-07 01:12:56.220059 | orchestrator | 2026-04-07 01:12:56.220063 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-07 01:12:56.220067 | orchestrator | Tuesday 07 April 2026 01:08:19 +0000 (0:00:12.789) 0:04:06.439 ********* 2026-04-07 01:12:56.220071 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:12:56.220075 | orchestrator | 2026-04-07 01:12:56.220079 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-07 01:12:56.220083 | orchestrator | Tuesday 07 April 2026 01:08:20 +0000 (0:00:01.711) 0:04:08.151 ********* 2026-04-07 01:12:56.220086 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.220090 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.220097 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.220101 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.220104 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.220108 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.220112 | orchestrator | 2026-04-07 01:12:56.220116 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-07 01:12:56.220120 | orchestrator | Tuesday 07 April 2026 01:08:21 +0000 (0:00:00.865) 0:04:09.017 ********* 2026-04-07 01:12:56.220123 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.220127 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.220131 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.220135 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:12:56.220139 | orchestrator | 2026-04-07 01:12:56.220142 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-07 01:12:56.220158 | orchestrator | Tuesday 07 April 2026 01:08:22 +0000 (0:00:01.058) 0:04:10.075 ********* 2026-04-07 01:12:56.220163 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-07 01:12:56.220167 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-07 01:12:56.220170 | orchestrator | ok: [testbed-node-5] => 
(item=br_netfilter) 2026-04-07 01:12:56.220174 | orchestrator | 2026-04-07 01:12:56.220178 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-07 01:12:56.220182 | orchestrator | Tuesday 07 April 2026 01:08:24 +0000 (0:00:01.604) 0:04:11.679 ********* 2026-04-07 01:12:56.220186 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-07 01:12:56.220190 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-07 01:12:56.220193 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-07 01:12:56.220201 | orchestrator | 2026-04-07 01:12:56.220204 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-07 01:12:56.220208 | orchestrator | Tuesday 07 April 2026 01:08:25 +0000 (0:00:01.159) 0:04:12.839 ********* 2026-04-07 01:12:56.220212 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-07 01:12:56.220216 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.220220 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-07 01:12:56.220223 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.220227 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-07 01:12:56.220231 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.220234 | orchestrator | 2026-04-07 01:12:56.220238 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-07 01:12:56.220242 | orchestrator | Tuesday 07 April 2026 01:08:26 +0000 (0:00:00.794) 0:04:13.633 ********* 2026-04-07 01:12:56.220246 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 01:12:56.220250 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 01:12:56.220254 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.220304 | 
orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 01:12:56.220310 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 01:12:56.220316 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.220322 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 01:12:56.220328 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 01:12:56.220336 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.220342 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-07 01:12:56.220348 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-07 01:12:56.220354 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-07 01:12:56.220360 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-07 01:12:56.220366 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-07 01:12:56.220371 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-07 01:12:56.220375 | orchestrator | 2026-04-07 01:12:56.220379 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-07 01:12:56.220383 | orchestrator | Tuesday 07 April 2026 01:08:27 +0000 (0:00:01.126) 0:04:14.760 ********* 2026-04-07 01:12:56.220386 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.220390 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.220394 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:12:56.220398 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.220402 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:12:56.220406 | orchestrator | changed: [testbed-node-5] 2026-04-07 
01:12:56.220409 | orchestrator | 2026-04-07 01:12:56.220413 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-07 01:12:56.220417 | orchestrator | Tuesday 07 April 2026 01:08:28 +0000 (0:00:01.347) 0:04:16.107 ********* 2026-04-07 01:12:56.220421 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.220425 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.220428 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.220432 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:12:56.220436 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:12:56.220439 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:12:56.220443 | orchestrator | 2026-04-07 01:12:56.220447 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-07 01:12:56.220451 | orchestrator | Tuesday 07 April 2026 01:08:30 +0000 (0:00:01.767) 0:04:17.875 ********* 2026-04-07 01:12:56.220459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220491 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220496 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220507 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220585 | orchestrator | 2026-04-07 01:12:56.220589 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-07 01:12:56.220593 | orchestrator | Tuesday 07 April 2026 01:08:34 +0000 (0:00:03.563) 0:04:21.438 ********* 
2026-04-07 01:12:56.220597 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:12:56.220602 | orchestrator | 2026-04-07 01:12:56.220606 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-07 01:12:56.220610 | orchestrator | Tuesday 07 April 2026 01:08:36 +0000 (0:00:01.915) 0:04:23.354 ********* 2026-04-07 01:12:56.220614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220702 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.220721 | orchestrator | 2026-04-07 01:12:56.220725 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-07 01:12:56.220729 | orchestrator | Tuesday 07 April 2026 01:08:40 +0000 (0:00:04.326) 0:04:27.680 ********* 2026-04-07 01:12:56.220745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 01:12:56.220751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 01:12:56.220755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.220759 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.220763 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 01:12:56.220770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 01:12:56.220790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.220795 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.220799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 01:12:56.220803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 01:12:56.220808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.220816 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.220820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 01:12:56.220827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 01:12:56.220845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.220852 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.220861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.220870 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.220877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 01:12:56.220884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.220898 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.220905 | orchestrator | 2026-04-07 01:12:56.220910 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-07 01:12:56.220916 | orchestrator | Tuesday 07 April 2026 01:08:41 +0000 (0:00:01.274) 0:04:28.954 ********* 2026-04-07 01:12:56.220922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 01:12:56.220933 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 01:12:56.220958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.220965 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.220972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 01:12:56.220979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 01:12:56.220990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.220997 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.221002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 01:12:56.221011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 01:12:56.221036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221043 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:12:56.221049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-07 01:12:56.221060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221066 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.221072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-07 01:12:56.221078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221084 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.221094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-07 01:12:56.221116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221121 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.221125 | orchestrator |
2026-04-07 01:12:56.221129 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-07 01:12:56.221133 | orchestrator | Tuesday 07 April 2026 01:08:43 +0000 (0:00:01.878) 0:04:30.833 *********
2026-04-07 01:12:56.221136 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.221140 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.221144 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.221148 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 01:12:56.221156 | orchestrator |
2026-04-07 01:12:56.221159 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-04-07 01:12:56.221163 | orchestrator | Tuesday 07 April 2026 01:08:44 +0000 (0:00:00.842) 0:04:31.676 *********
2026-04-07 01:12:56.221167 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-07 01:12:56.221171 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-07 01:12:56.221175 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 01:12:56.221178 | orchestrator |
2026-04-07 01:12:56.221182 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-04-07 01:12:56.221186 | orchestrator | Tuesday 07 April 2026 01:08:45 +0000 (0:00:00.927) 0:04:32.603 *********
2026-04-07 01:12:56.221190 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 01:12:56.221195 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-07 01:12:56.221215 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-07 01:12:56.221230 | orchestrator |
2026-04-07 01:12:56.221236 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-04-07 01:12:56.221243 | orchestrator | Tuesday 07 April 2026 01:08:46 +0000 (0:00:01.607) 0:04:34.211 *********
2026-04-07 01:12:56.221249 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:12:56.221273 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:12:56.221280 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:12:56.221287 | orchestrator |
2026-04-07 01:12:56.221293 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-04-07 01:12:56.221299 | orchestrator | Tuesday 07 April 2026 01:08:47 +0000 (0:00:00.852) 0:04:35.063 *********
2026-04-07 01:12:56.221304 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:12:56.221310 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:12:56.221316 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:12:56.221322 | orchestrator |
2026-04-07 01:12:56.221328 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-04-07 01:12:56.221334 | orchestrator | Tuesday 07 April 2026 01:08:48 +0000 (0:00:00.714) 0:04:35.778 *********
2026-04-07 01:12:56.221341 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-07 01:12:56.221347 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-07 01:12:56.221353 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-07 01:12:56.221360 | orchestrator |
2026-04-07 01:12:56.221365 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-04-07 01:12:56.221369 | orchestrator | Tuesday 07 April 2026 01:08:49 +0000 (0:00:01.067) 0:04:36.845 *********
2026-04-07 01:12:56.221373 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-07 01:12:56.221377 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-07 01:12:56.221381 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-07 01:12:56.221384 | orchestrator |
2026-04-07 01:12:56.221388 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-04-07 01:12:56.221392 | orchestrator | Tuesday 07 April 2026 01:08:50 +0000 (0:00:01.200) 0:04:38.045 *********
2026-04-07 01:12:56.221396 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-07 01:12:56.221399 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-07 01:12:56.221403 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-07 01:12:56.221407 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-04-07 01:12:56.221411 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-04-07 01:12:56.221415 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-04-07 01:12:56.221418 | orchestrator |
2026-04-07 01:12:56.221422 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-04-07 01:12:56.221426 | orchestrator | Tuesday 07 April 2026 01:08:54 +0000 (0:00:03.502) 0:04:41.548 *********
2026-04-07 01:12:56.221430 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:12:56.221433 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:12:56.221443 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:12:56.221447 | orchestrator |
2026-04-07 01:12:56.221451 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-04-07 01:12:56.221455 | orchestrator | Tuesday 07 April 2026 01:08:54 +0000 (0:00:00.302) 0:04:41.850 *********
2026-04-07 01:12:56.221465 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:12:56.221469 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:12:56.221472 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:12:56.221476 | orchestrator |
2026-04-07 01:12:56.221480 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-04-07 01:12:56.221483 | orchestrator | Tuesday 07 April 2026 01:08:54 +0000 (0:00:00.300) 0:04:42.151 *********
2026-04-07 01:12:56.221487 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:12:56.221491 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:12:56.221495 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:12:56.221499 | orchestrator |
2026-04-07 01:12:56.221502 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-04-07 01:12:56.221506 | orchestrator | Tuesday 07 April 2026 01:08:56 +0000 (0:00:01.362) 0:04:43.513 *********
2026-04-07 01:12:56.221534 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-07 01:12:56.221541 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-07 01:12:56.221547 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-07 01:12:56.221555 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-07 01:12:56.221564 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-07 01:12:56.221570 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-07 01:12:56.221576 | orchestrator |
2026-04-07 01:12:56.221582 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-04-07 01:12:56.221588 | orchestrator | Tuesday 07 April 2026 01:08:59 +0000 (0:00:03.376) 0:04:46.890 *********
2026-04-07 01:12:56.221595 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-07 01:12:56.221601 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-07 01:12:56.221607 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-07 01:12:56.221613 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-07 01:12:56.221619 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:12:56.221625 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-07 01:12:56.221629 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:12:56.221634 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-07 01:12:56.221640 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:12:56.221646 | orchestrator |
2026-04-07 01:12:56.221652 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-04-07 01:12:56.221658 | orchestrator | Tuesday 07 April 2026 01:09:03 +0000 (0:00:03.905) 0:04:50.796 *********
2026-04-07 01:12:56.221665 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.221669 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.221673 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.221677 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 01:12:56.221681 | orchestrator |
2026-04-07 01:12:56.221684 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-04-07 01:12:56.221688 | orchestrator | Tuesday 07 April 2026 01:09:05 +0000 (0:00:01.812) 0:04:52.608 *********
2026-04-07 01:12:56.221692 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 01:12:56.221701 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-07 01:12:56.221705 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-07 01:12:56.221709 | orchestrator |
2026-04-07 01:12:56.221712 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-04-07 01:12:56.221716 | orchestrator | Tuesday 07 April 2026 01:09:06 +0000 (0:00:00.953) 0:04:53.561 *********
2026-04-07 01:12:56.221720 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:12:56.221724 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:12:56.221728 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:12:56.221731 | orchestrator |
2026-04-07 01:12:56.221735 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-04-07 01:12:56.221739 | orchestrator | Tuesday 07 April 2026 01:09:06 +0000 (0:00:00.274) 0:04:53.835 *********
2026-04-07 01:12:56.221743 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:12:56.221747 | orchestrator |
2026-04-07 01:12:56.221751 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-04-07 01:12:56.221754 | orchestrator | Tuesday 07 April 2026 01:09:06 +0000 (0:00:00.135) 0:04:53.971 *********
2026-04-07 01:12:56.221758 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:12:56.221762 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:12:56.221765 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:12:56.221769 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.221773 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.221777 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.221780 | orchestrator |
2026-04-07 01:12:56.221784 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-04-07 01:12:56.221788 | orchestrator | Tuesday 07 April 2026 01:09:07 +0000 (0:00:00.854) 0:04:54.825 *********
2026-04-07 01:12:56.221792 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 01:12:56.221795 | orchestrator |
2026-04-07 01:12:56.221799 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-04-07 01:12:56.221803 | orchestrator | Tuesday 07 April 2026 01:09:08 +0000 (0:00:01.076) 0:04:55.901 *********
2026-04-07 01:12:56.221807 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:12:56.221810 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:12:56.221814 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:12:56.221818 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.221826 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.221830 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.221833 | orchestrator |
2026-04-07 01:12:56.221837 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-04-07 01:12:56.221841 | orchestrator | Tuesday 07 April 2026 01:09:09 +0000 (0:00:00.461) 0:04:56.363 *********
2026-04-07 01:12:56.221853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-07 01:12:56.221860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-07 01:12:56.221873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-07 01:12:56.221883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-07 01:12:56.221891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-07 01:12:56.221901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-07 01:12:56.221914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-07 01:12:56.221934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-07 01:12:56.221940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-07 01:12:56.221946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221966 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.221991 | orchestrator |
2026-04-07 01:12:56.221997 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-04-07 01:12:56.222003 | orchestrator | Tuesday 07 April 2026 01:09:12 +0000 (0:00:03.843) 0:05:00.206 *********
2026-04-07 01:12:56.222009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-07 01:12:56.222064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-07 01:12:56.222074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-07 01:12:56.222083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-07 01:12:56.222092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-07 01:12:56.222099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-07 01:12:56.222105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.222109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-07 01:12:56.222120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.222124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-07 01:12:56.222131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-07 01:12:56.222135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.222139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.222144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.222150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 01:12:56.222154 | orchestrator |
2026-04-07 01:12:56.222158 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-04-07 01:12:56.222162 | orchestrator | Tuesday 07 April 2026 01:09:19 +0000 (0:00:06.566) 0:05:06.772 *********
2026-04-07 01:12:56.222169 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:12:56.222173 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:12:56.222177 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:12:56.222181 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.222187 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.222191 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.222195 | orchestrator |
2026-04-07 01:12:56.222198 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-07 01:12:56.222202 | orchestrator | Tuesday 07 April 2026 01:09:20 +0000 (0:00:01.398) 0:05:08.171 *********
2026-04-07 01:12:56.222206 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-07 01:12:56.222210 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-07 01:12:56.222213 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-07 01:12:56.222217 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-07 01:12:56.222221 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-07 01:12:56.222225 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-07 01:12:56.222229 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-07 01:12:56.222233 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:12:56.222236 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-07 01:12:56.222240 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:12:56.222244 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-07 01:12:56.222248 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:12:56.222251 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-07 01:12:56.222269 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-07 01:12:56.222275 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-07 01:12:56.222282 | orchestrator |
2026-04-07 01:12:56.222287 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-07 01:12:56.222291 | orchestrator | Tuesday 07 April 2026 01:09:24 +0000 (0:00:04.098) 0:05:12.270 *********
2026-04-07 01:12:56.222295 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:12:56.222298 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:12:56.222302 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:12:56.222306 | orchestrator | skipping: 
[testbed-node-0] 2026-04-07 01:12:56.222310 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.222313 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.222317 | orchestrator | 2026-04-07 01:12:56.222321 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-07 01:12:56.222324 | orchestrator | Tuesday 07 April 2026 01:09:25 +0000 (0:00:00.863) 0:05:13.133 ********* 2026-04-07 01:12:56.222328 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-07 01:12:56.222333 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-07 01:12:56.222336 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-07 01:12:56.222340 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-07 01:12:56.222344 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-07 01:12:56.222348 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-07 01:12:56.222367 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-07 01:12:56.222373 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-07 01:12:56.222379 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.222385 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-07 01:12:56.222391 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 
'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-07 01:12:56.222399 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-07 01:12:56.222408 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.222421 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-07 01:12:56.222427 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.222433 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-07 01:12:56.222438 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-07 01:12:56.222444 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-07 01:12:56.222450 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-07 01:12:56.222460 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-07 01:12:56.222466 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-07 01:12:56.222472 | orchestrator | 2026-04-07 01:12:56.222478 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-07 01:12:56.222484 | orchestrator | Tuesday 07 April 2026 01:09:31 +0000 (0:00:05.704) 0:05:18.837 ********* 2026-04-07 01:12:56.222490 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 01:12:56.222497 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 01:12:56.222503 | 
orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 01:12:56.222509 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-07 01:12:56.222515 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-07 01:12:56.222521 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 01:12:56.222527 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 01:12:56.222533 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 01:12:56.222539 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-07 01:12:56.222545 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 01:12:56.222551 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 01:12:56.222556 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-07 01:12:56.222562 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.222568 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 01:12:56.222574 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 01:12:56.222586 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 01:12:56.222592 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-07 01:12:56.222598 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.222604 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 01:12:56.222610 | 
orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-07 01:12:56.222615 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.222622 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 01:12:56.222628 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 01:12:56.222634 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 01:12:56.222640 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 01:12:56.222647 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 01:12:56.222653 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 01:12:56.222659 | orchestrator | 2026-04-07 01:12:56.222665 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-07 01:12:56.222671 | orchestrator | Tuesday 07 April 2026 01:09:37 +0000 (0:00:06.399) 0:05:25.238 ********* 2026-04-07 01:12:56.222677 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.222683 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.222690 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.222696 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.222702 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.222708 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.222714 | orchestrator | 2026-04-07 01:12:56.222721 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-07 01:12:56.222728 | orchestrator | Tuesday 07 April 2026 01:09:38 +0000 (0:00:00.731) 0:05:25.970 ********* 2026-04-07 01:12:56.222734 | orchestrator | skipping: [testbed-node-3] 
2026-04-07 01:12:56.222740 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.222746 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.222752 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.222759 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.222769 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.222776 | orchestrator | 2026-04-07 01:12:56.222781 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-07 01:12:56.222788 | orchestrator | Tuesday 07 April 2026 01:09:39 +0000 (0:00:00.953) 0:05:26.923 ********* 2026-04-07 01:12:56.222794 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.222800 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.222807 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:12:56.222813 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.222819 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:12:56.222825 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:12:56.222831 | orchestrator | 2026-04-07 01:12:56.222837 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-07 01:12:56.222844 | orchestrator | Tuesday 07 April 2026 01:09:42 +0000 (0:00:02.384) 0:05:29.308 ********* 2026-04-07 01:12:56.222850 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.222860 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.222867 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:12:56.222873 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.222879 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:12:56.222885 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:12:56.222891 | orchestrator | 2026-04-07 01:12:56.222908 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-07 01:12:56.222915 | orchestrator | Tuesday 07 
April 2026 01:09:44 +0000 (0:00:02.405) 0:05:31.714 ********* 2026-04-07 01:12:56.222928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 01:12:56.222936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 01:12:56.222943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.222950 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.222961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 01:12:56.222968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2026-04-07 01:12:56.222980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.222991 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.222998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 01:12:56.223003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.223008 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.223012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 01:12:56.223019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 01:12:56.223023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 01:12:56.223036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.223042 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.223051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.223059 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.223065 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 01:12:56.223071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 01:12:56.223077 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.223083 | orchestrator | 2026-04-07 01:12:56.223088 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-07 01:12:56.223094 | orchestrator | Tuesday 07 April 2026 01:09:45 +0000 (0:00:01.383) 0:05:33.098 ********* 2026-04-07 01:12:56.223099 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-07 01:12:56.223106 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-07 01:12:56.223111 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-07 01:12:56.223116 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-07 01:12:56.223122 | orchestrator | skipping: [testbed-node-3] 
2026-04-07 01:12:56.223128 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-07 01:12:56.223133 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-07 01:12:56.223139 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.223144 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-07 01:12:56.223150 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-07 01:12:56.223163 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.223169 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-07 01:12:56.223179 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-07 01:12:56.223185 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.223191 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.223198 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-07 01:12:56.223204 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-07 01:12:56.223210 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.223216 | orchestrator | 2026-04-07 01:12:56.223220 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-07 01:12:56.223223 | orchestrator | Tuesday 07 April 2026 01:09:46 +0000 (0:00:00.646) 0:05:33.744 ********* 2026-04-07 01:12:56.223236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223290 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223346 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223379 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 01:12:56.223391 | orchestrator | 2026-04-07 01:12:56.223394 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-07 01:12:56.223398 | orchestrator | Tuesday 07 April 2026 01:09:49 +0000 (0:00:02.842) 0:05:36.586 ********* 2026-04-07 01:12:56.223402 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.223406 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.223410 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.223414 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.223417 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.223421 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.223425 | orchestrator | 2026-04-07 01:12:56.223429 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 01:12:56.223436 | orchestrator | Tuesday 07 April 2026 01:09:49 +0000 (0:00:00.668) 0:05:37.255 ********* 2026-04-07 01:12:56.223439 | orchestrator | 2026-04-07 01:12:56.223443 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 01:12:56.223447 | orchestrator | Tuesday 07 April 2026 01:09:50 +0000 (0:00:00.128) 0:05:37.383 ********* 2026-04-07 01:12:56.223451 | orchestrator | 2026-04-07 01:12:56.223455 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 01:12:56.223458 | orchestrator | Tuesday 07 April 2026 01:09:50 +0000 (0:00:00.132) 0:05:37.516 ********* 2026-04-07 01:12:56.223462 | orchestrator | 2026-04-07 01:12:56.223466 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 01:12:56.223470 | orchestrator | Tuesday 07 April 2026 01:09:50 +0000 
(0:00:00.141) 0:05:37.657 ********* 2026-04-07 01:12:56.223474 | orchestrator | 2026-04-07 01:12:56.223477 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 01:12:56.223481 | orchestrator | Tuesday 07 April 2026 01:09:50 +0000 (0:00:00.131) 0:05:37.789 ********* 2026-04-07 01:12:56.223485 | orchestrator | 2026-04-07 01:12:56.223489 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 01:12:56.223492 | orchestrator | Tuesday 07 April 2026 01:09:50 +0000 (0:00:00.278) 0:05:38.067 ********* 2026-04-07 01:12:56.223496 | orchestrator | 2026-04-07 01:12:56.223500 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-07 01:12:56.223504 | orchestrator | Tuesday 07 April 2026 01:09:50 +0000 (0:00:00.129) 0:05:38.197 ********* 2026-04-07 01:12:56.223507 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:12:56.223511 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:12:56.223520 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:12:56.223524 | orchestrator | 2026-04-07 01:12:56.223528 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-07 01:12:56.223532 | orchestrator | Tuesday 07 April 2026 01:09:59 +0000 (0:00:09.079) 0:05:47.277 ********* 2026-04-07 01:12:56.223536 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:12:56.223539 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:12:56.223543 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:12:56.223547 | orchestrator | 2026-04-07 01:12:56.223551 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-07 01:12:56.223554 | orchestrator | Tuesday 07 April 2026 01:10:13 +0000 (0:00:13.613) 0:06:00.891 ********* 2026-04-07 01:12:56.223558 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:12:56.223562 | orchestrator | 
changed: [testbed-node-3] 2026-04-07 01:12:56.223566 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:12:56.223569 | orchestrator | 2026-04-07 01:12:56.223576 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-07 01:12:56.223580 | orchestrator | Tuesday 07 April 2026 01:10:49 +0000 (0:00:35.974) 0:06:36.865 ********* 2026-04-07 01:12:56.223583 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:12:56.223587 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:12:56.223591 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:12:56.223594 | orchestrator | 2026-04-07 01:12:56.223600 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-07 01:12:56.223606 | orchestrator | Tuesday 07 April 2026 01:11:20 +0000 (0:00:30.876) 0:07:07.742 ********* 2026-04-07 01:12:56.223612 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:12:56.223617 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:12:56.223623 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:12:56.223629 | orchestrator | 2026-04-07 01:12:56.223635 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-07 01:12:56.223641 | orchestrator | Tuesday 07 April 2026 01:11:22 +0000 (0:00:01.809) 0:07:09.552 ********* 2026-04-07 01:12:56.223647 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:12:56.223653 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:12:56.223659 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:12:56.223663 | orchestrator | 2026-04-07 01:12:56.223673 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-07 01:12:56.223679 | orchestrator | Tuesday 07 April 2026 01:11:23 +0000 (0:00:00.801) 0:07:10.353 ********* 2026-04-07 01:12:56.223685 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:12:56.223690 | orchestrator | changed: 
[testbed-node-3] 2026-04-07 01:12:56.223696 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:12:56.223702 | orchestrator | 2026-04-07 01:12:56.223708 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-07 01:12:56.223714 | orchestrator | Tuesday 07 April 2026 01:11:46 +0000 (0:00:23.730) 0:07:34.083 ********* 2026-04-07 01:12:56.223719 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.223724 | orchestrator | 2026-04-07 01:12:56.223730 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-07 01:12:56.223736 | orchestrator | Tuesday 07 April 2026 01:11:47 +0000 (0:00:00.290) 0:07:34.374 ********* 2026-04-07 01:12:56.223742 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.223748 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.223754 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.223760 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.223767 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.223771 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-04-07 01:12:56.223775 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 01:12:56.223778 | orchestrator | 2026-04-07 01:12:56.223782 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-07 01:12:56.223786 | orchestrator | Tuesday 07 April 2026 01:12:08 +0000 (0:00:21.377) 0:07:55.751 ********* 2026-04-07 01:12:56.223790 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.223793 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.223797 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.223801 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.223804 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.223808 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.223812 | orchestrator | 2026-04-07 01:12:56.223816 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-07 01:12:56.223819 | orchestrator | Tuesday 07 April 2026 01:12:16 +0000 (0:00:08.228) 0:08:03.980 ********* 2026-04-07 01:12:56.223823 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.223827 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.223830 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.223834 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.223838 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.223842 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-04-07 01:12:56.223845 | orchestrator | 2026-04-07 01:12:56.223849 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-07 01:12:56.223854 | orchestrator | Tuesday 07 April 2026 01:12:20 +0000 (0:00:03.336) 0:08:07.316 ********* 2026-04-07 01:12:56.223860 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 01:12:56.223866 | 
orchestrator | 2026-04-07 01:12:56.223872 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-07 01:12:56.223877 | orchestrator | Tuesday 07 April 2026 01:12:32 +0000 (0:00:12.949) 0:08:20.266 ********* 2026-04-07 01:12:56.223883 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 01:12:56.223889 | orchestrator | 2026-04-07 01:12:56.223894 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-07 01:12:56.223900 | orchestrator | Tuesday 07 April 2026 01:12:34 +0000 (0:00:01.737) 0:08:22.004 ********* 2026-04-07 01:12:56.223905 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.223910 | orchestrator | 2026-04-07 01:12:56.223916 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-07 01:12:56.223927 | orchestrator | Tuesday 07 April 2026 01:12:35 +0000 (0:00:01.216) 0:08:23.220 ********* 2026-04-07 01:12:56.223936 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 01:12:56.223942 | orchestrator | 2026-04-07 01:12:56.223947 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-07 01:12:56.223952 | orchestrator | Tuesday 07 April 2026 01:12:47 +0000 (0:00:11.466) 0:08:34.687 ********* 2026-04-07 01:12:56.223958 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:12:56.223963 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:12:56.223968 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:12:56.223974 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:12:56.223980 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:12:56.223985 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:12:56.223992 | orchestrator | 2026-04-07 01:12:56.223997 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-07 01:12:56.224003 | orchestrator | 2026-04-07 
01:12:56.224009 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-07 01:12:56.224020 | orchestrator | Tuesday 07 April 2026 01:12:49 +0000 (0:00:01.624) 0:08:36.311 ********* 2026-04-07 01:12:56.224024 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:12:56.224028 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:12:56.224032 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:12:56.224036 | orchestrator | 2026-04-07 01:12:56.224040 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-07 01:12:56.224043 | orchestrator | 2026-04-07 01:12:56.224047 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-07 01:12:56.224051 | orchestrator | Tuesday 07 April 2026 01:12:50 +0000 (0:00:01.169) 0:08:37.481 ********* 2026-04-07 01:12:56.224055 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.224059 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.224062 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.224066 | orchestrator | 2026-04-07 01:12:56.224070 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-07 01:12:56.224074 | orchestrator | 2026-04-07 01:12:56.224077 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-07 01:12:56.224081 | orchestrator | Tuesday 07 April 2026 01:12:50 +0000 (0:00:00.480) 0:08:37.962 ********* 2026-04-07 01:12:56.224085 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-07 01:12:56.224089 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-07 01:12:56.224093 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-07 01:12:56.224097 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-07 01:12:56.224101 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-07 01:12:56.224105 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-07 01:12:56.224108 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-07 01:12:56.224112 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-07 01:12:56.224116 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-07 01:12:56.224119 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-07 01:12:56.224123 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-07 01:12:56.224127 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-07 01:12:56.224131 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:12:56.224136 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-07 01:12:56.224142 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-07 01:12:56.224147 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-07 01:12:56.224154 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-07 01:12:56.224160 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-07 01:12:56.224166 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-07 01:12:56.224179 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:12:56.224185 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-07 01:12:56.224191 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-07 01:12:56.224198 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-07 01:12:56.224204 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-07 01:12:56.224210 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-07 
01:12:56.224216 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-07 01:12:56.224222 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:12:56.224230 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-07 01:12:56.224234 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-07 01:12:56.224238 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-07 01:12:56.224242 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-07 01:12:56.224246 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-07 01:12:56.224249 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.224253 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-07 01:12:56.224280 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.224284 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-07 01:12:56.224288 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-07 01:12:56.224292 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-07 01:12:56.224296 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-07 01:12:56.224299 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-07 01:12:56.224303 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-07 01:12:56.224307 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.224311 | orchestrator | 2026-04-07 01:12:56.224314 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-07 01:12:56.224318 | orchestrator | 2026-04-07 01:12:56.224325 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-07 01:12:56.224329 | orchestrator | Tuesday 07 April 2026 01:12:51 +0000 (0:00:01.209) 
0:08:39.171 ********* 2026-04-07 01:12:56.224333 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-07 01:12:56.224337 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-07 01:12:56.224340 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.224344 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-07 01:12:56.224348 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-07 01:12:56.224351 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.224355 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-07 01:12:56.224359 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-07 01:12:56.224363 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:12:56.224366 | orchestrator | 2026-04-07 01:12:56.224374 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-07 01:12:56.224378 | orchestrator | 2026-04-07 01:12:56.224382 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-07 01:12:56.224386 | orchestrator | Tuesday 07 April 2026 01:12:52 +0000 (0:00:00.756) 0:08:39.927 ********* 2026-04-07 01:12:56.224389 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.224393 | orchestrator | 2026-04-07 01:12:56.224397 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-07 01:12:56.224401 | orchestrator | 2026-04-07 01:12:56.224404 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-07 01:12:56.224408 | orchestrator | Tuesday 07 April 2026 01:12:53 +0000 (0:00:00.655) 0:08:40.583 ********* 2026-04-07 01:12:56.224416 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:12:56.224420 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:12:56.224424 | orchestrator | skipping: [testbed-node-2] 
2026-04-07 01:12:56.224428 | orchestrator | 2026-04-07 01:12:56.224431 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:12:56.224435 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 01:12:56.224442 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-07 01:12:56.224449 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-07 01:12:56.224455 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-07 01:12:56.224460 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-07 01:12:56.224469 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-07 01:12:56.224479 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-07 01:12:56.224485 | orchestrator | 2026-04-07 01:12:56.224491 | orchestrator | 2026-04-07 01:12:56.224497 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:12:56.224502 | orchestrator | Tuesday 07 April 2026 01:12:53 +0000 (0:00:00.559) 0:08:41.142 ********* 2026-04-07 01:12:56.224507 | orchestrator | =============================================================================== 2026-04-07 01:12:56.224513 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 35.97s 2026-04-07 01:12:56.224519 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.88s 2026-04-07 01:12:56.224525 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 25.71s 2026-04-07 01:12:56.224531 | orchestrator | nova-cell : 
Restart nova-compute container ----------------------------- 23.73s 2026-04-07 01:12:56.224538 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.67s 2026-04-07 01:12:56.224543 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.38s 2026-04-07 01:12:56.224549 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.59s 2026-04-07 01:12:56.224556 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.91s 2026-04-07 01:12:56.224561 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.61s 2026-04-07 01:12:56.224567 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.95s 2026-04-07 01:12:56.224572 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.79s 2026-04-07 01:12:56.224578 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.04s 2026-04-07 01:12:56.224584 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.71s 2026-04-07 01:12:56.224590 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.53s 2026-04-07 01:12:56.224596 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.47s 2026-04-07 01:12:56.224602 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.65s 2026-04-07 01:12:56.224608 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.32s 2026-04-07 01:12:56.224618 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 9.08s 2026-04-07 01:12:56.224622 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.23s 2026-04-07 01:12:56.224630 | orchestrator | service-rabbitmq : nova | 
Ensure RabbitMQ users exist ------------------- 7.61s 2026-04-07 01:12:56.224634 | orchestrator | 2026-04-07 01:12:56 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:12:59.261576 | orchestrator | 2026-04-07 01:12:59 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED 2026-04-07 01:12:59.261641 | orchestrator | 2026-04-07 01:12:59 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state STARTED 2026-04-07 01:12:59.261650 | orchestrator | 2026-04-07 01:12:59 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:13:02.311344 | orchestrator | 2026-04-07 01:13:02 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED 2026-04-07 01:13:02.313959 | orchestrator | 2026-04-07 01:13:02 | INFO  | Task 78c6cef2-cb1e-44f1-807a-b4ab1fd7d624 is in state SUCCESS 2026-04-07 01:13:02.315267 | orchestrator | 2026-04-07 01:13:02.315302 | orchestrator | 2026-04-07 01:13:02.315308 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:13:02.315312 | orchestrator | 2026-04-07 01:13:02.315317 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 01:13:02.315324 | orchestrator | Tuesday 07 April 2026 01:10:54 +0000 (0:00:00.329) 0:00:00.329 ********* 2026-04-07 01:13:02.315330 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:13:02.315337 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:13:02.315343 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:13:02.315349 | orchestrator | 2026-04-07 01:13:02.315356 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:13:02.315363 | orchestrator | Tuesday 07 April 2026 01:10:54 +0000 (0:00:00.275) 0:00:00.604 ********* 2026-04-07 01:13:02.315370 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-07 01:13:02.315377 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 
2026-04-07 01:13:02.315384 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-07 01:13:02.315391 | orchestrator |
2026-04-07 01:13:02.315397 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-07 01:13:02.315404 | orchestrator |
2026-04-07 01:13:02.315410 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-07 01:13:02.315417 | orchestrator | Tuesday 07 April 2026 01:10:55 +0000 (0:00:00.369) 0:00:00.974 *********
2026-04-07 01:13:02.315424 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:13:02.315432 | orchestrator |
2026-04-07 01:13:02.315438 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-07 01:13:02.315445 | orchestrator | Tuesday 07 April 2026 01:10:55 +0000 (0:00:00.576) 0:00:01.551 *********
2026-04-07 01:13:02.315454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315735 | orchestrator |
2026-04-07 01:13:02.315752 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-04-07 01:13:02.315760 | orchestrator | Tuesday 07 April 2026 01:10:56 +0000 (0:00:01.108) 0:00:02.659 *********
2026-04-07 01:13:02.315767 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-04-07 01:13:02.315774 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-04-07 01:13:02.315781 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 01:13:02.315787 | orchestrator |
2026-04-07 01:13:02.315794 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-07 01:13:02.315800 | orchestrator | Tuesday 07 April 2026 01:10:58 +0000 (0:00:01.236) 0:00:03.895 *********
2026-04-07 01:13:02.315807 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:13:02.315813 | orchestrator |
2026-04-07 01:13:02.315819 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-04-07 01:13:02.315826 | orchestrator | Tuesday 07 April 2026 01:10:58 +0000 (0:00:00.599) 0:00:04.495 *********
2026-04-07 01:13:02.315840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315869 | orchestrator |
2026-04-07 01:13:02.315877 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-04-07 01:13:02.315884 | orchestrator | Tuesday 07 April 2026 01:11:00 +0000 (0:00:01.830) 0:00:06.325 *********
2026-04-07 01:13:02.315891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315908 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:13:02.315915 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:13:02.315928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315935 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:13:02.315942 | orchestrator |
2026-04-07 01:13:02.315949 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-04-07 01:13:02.315955 | orchestrator | Tuesday 07 April 2026 01:11:01 +0000 (0:00:00.453) 0:00:06.778 *********
2026-04-07 01:13:02.315961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315979 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:13:02.315985 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:13:02.315992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.315997 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:13:02.316001 | orchestrator |
2026-04-07 01:13:02.316005 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-04-07 01:13:02.316008 | orchestrator | Tuesday 07 April 2026 01:11:01 +0000 (0:00:00.757) 0:00:07.536 *********
2026-04-07 01:13:02.316015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.316019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.316029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.316036 | orchestrator |
2026-04-07 01:13:02.316042 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-04-07 01:13:02.316048 | orchestrator | Tuesday 07 April 2026 01:11:03 +0000 (0:00:01.318) 0:00:08.855 *********
2026-04-07 01:13:02.316054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.316066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.316074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 01:13:02.316081 | orchestrator |
2026-04-07 01:13:02.316087 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-07 01:13:02.316094 | orchestrator | Tuesday 07 April 2026 01:11:04 +0000 (0:00:01.242) 0:00:10.097 *********
2026-04-07 01:13:02.316101 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:13:02.316107 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:13:02.316114 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:13:02.316120 | orchestrator |
2026-04-07 01:13:02.316126 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-07 01:13:02.316133 | orchestrator | Tuesday 07 April 2026 01:11:04 +0000 (0:00:00.295) 0:00:10.393 *********
2026-04-07 01:13:02.316195 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-07 01:13:02.316204 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-07 01:13:02.316208 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-07 01:13:02.316212 | orchestrator |
2026-04-07 01:13:02.316216 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-07 01:13:02.316219 | orchestrator | Tuesday 07 April 2026 01:11:05 +0000 (0:00:01.232) 0:00:11.625 *********
2026-04-07 01:13:02.316223 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-07 01:13:02.316228 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-07 01:13:02.316232 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-07 01:13:02.316235 | orchestrator |
2026-04-07 01:13:02.316240 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-07 01:13:02.316247 | orchestrator | Tuesday 07 April 2026 01:11:07 +0000 (0:00:01.278) 0:00:12.903 *********
2026-04-07 01:13:02.316285 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 01:13:02.316295 | orchestrator |
2026-04-07 01:13:02.316301 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-07 01:13:02.316603 | orchestrator | Tuesday 07 April 2026 01:11:08 +0000 (0:00:00.945) 0:00:13.849 *********
2026-04-07 01:13:02.316615 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-04-07 01:13:02.316626 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-04-07 01:13:02.316631 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:13:02.316636 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:13:02.316641 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:13:02.316646 | orchestrator |
2026-04-07 01:13:02.316650 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-04-07 01:13:02.316655 | orchestrator | Tuesday 07 April 2026 01:11:08 +0000 (0:00:00.733) 0:00:14.582 *********
2026-04-07 01:13:02.316659 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:13:02.316664 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:13:02.316668 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:13:02.316673 | orchestrator |
2026-04-07 01:13:02.316678 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-07 01:13:02.316682 | orchestrator | Tuesday 07 April 2026 01:11:09 +0000 (0:00:00.290) 0:00:14.873 *********
2026-04-07 01:13:02.316687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1312001, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4865298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1312001, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4865298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1312001, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4865298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1312037, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.49453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1312037, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.49453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1312037, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.49453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1312482, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.629532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1312482, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.629532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1312482, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.629532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1312029, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.49153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1312029, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.49153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1312029, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.49153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1312485, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6313882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1312485, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6313882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1312485, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6313882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1312011, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4879618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1312011, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4879618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1312011, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4879618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1312458, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.622532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1312458, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.622532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1312458, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.622532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1312474, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.628153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1312474, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.628153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1312474, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.628153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1312000, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4845297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1312000, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4845297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1312000, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4845297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1312010, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4865298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1312010, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4865298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1312010, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4865298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1312034, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4925299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-07 01:13:02.316904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1312034, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4925299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False,
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1312034, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4925299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1312462, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.624532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1312462, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.624532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1312462, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.624532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1312480, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.628532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1312480, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.628532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1312480, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.628532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1312025, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4910414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1312025, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4910414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1312025, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.4910414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.316987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1312469, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.626532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1312492, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.631532, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1312469, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.626532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1312469, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.626532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1312461, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6231277, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1312492, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.631532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1312492, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.631532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1312454, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 
1775520151.0, 'ctime': 1775521304.62219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1312461, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6231277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1312461, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6231277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1312453, 'dev': 116, 'nlink': 1, 'atime': 
1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6205318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1312454, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.62219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1312454, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.62219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
49016, 'inode': 1312464, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.625532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1312453, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6205318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1312453, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6205318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1312450, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.619379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1312464, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.625532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1312464, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.625532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1312479, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.628532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1312020, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.489998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1312450, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.619379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1312450, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.619379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1312756, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6912072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1312479, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.628532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1312479, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.628532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1312528, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6685326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1312020, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.489998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1312509, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6348398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1312020, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.489998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1312756, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6912072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317220 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1312659, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6715326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1312756, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6912072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:13:02.317236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1312528, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6685326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1312498, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6325321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1312528, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6685326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1312703, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6821911, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1312509, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6348398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1312509, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6348398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1312666, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 
1775520151.0, 'ctime': 1775521304.6795328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1312659, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6715326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1312659, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6715326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 22303, 'inode': 1312709, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6821911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1312498, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6325321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1312498, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6325321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1312748, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.689533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1312703, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6821911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1312703, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6821911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1312700, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.680533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1312666, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6795328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1312666, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6795328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317403 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1312653, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6705327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1312709, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6821911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1312709, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6821911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1312523, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6383119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1312748, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.689533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1312650, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6695325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1312748, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.689533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1312700, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.680533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1312510, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6371856, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1312700, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.680533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1312653, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6705327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1312657, 'dev': 116, 'nlink': 1, 'atime': 
1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6713068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1312653, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6705327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1312523, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6383119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 
'inode': 1312523, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6383119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1312736, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6887853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1312650, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6695325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1312650, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6695325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1312720, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.686661, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1312510, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6371856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1312510, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6371856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1312500, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6332347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1312657, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6713068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1312504, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.634348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1312657, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6713068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1312736, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6887853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317546 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1312699, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.680533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1312736, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6887853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1312720, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.686661, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1312712, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6844208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1312720, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.686661, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1312500, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6332347, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1312500, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6332347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1312504, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.634348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1312504, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 
1775521304.634348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1312699, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.680533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1312699, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.680533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 21951, 'inode': 1312712, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6844208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1312712, 'dev': 116, 'nlink': 1, 'atime': 1775520151.0, 'mtime': 1775520151.0, 'ctime': 1775521304.6844208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 01:13:02.317614 | orchestrator | 2026-04-07 01:13:02.317618 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-07 01:13:02.317622 | orchestrator | Tuesday 07 April 2026 01:11:50 +0000 (0:00:41.644) 0:00:56.518 ********* 2026-04-07 01:13:02.317626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2026-04-07 01:13:02.317630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 01:13:02.317634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 01:13:02.317638 | orchestrator | 2026-04-07 01:13:02.317642 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-07 01:13:02.317646 | orchestrator | Tuesday 07 April 2026 01:11:51 +0000 (0:00:01.233) 0:00:57.751 ********* 2026-04-07 01:13:02.317652 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:13:02.317656 | orchestrator | 2026-04-07 01:13:02.317659 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-07 01:13:02.317663 | orchestrator | 
Tuesday 07 April 2026 01:11:54 +0000 (0:00:02.278) 0:01:00.030 ********* 2026-04-07 01:13:02.317667 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:13:02.317671 | orchestrator | 2026-04-07 01:13:02.317675 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-07 01:13:02.317678 | orchestrator | Tuesday 07 April 2026 01:11:56 +0000 (0:00:02.209) 0:01:02.240 ********* 2026-04-07 01:13:02.317682 | orchestrator | 2026-04-07 01:13:02.317686 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-07 01:13:02.317692 | orchestrator | Tuesday 07 April 2026 01:11:56 +0000 (0:00:00.063) 0:01:02.303 ********* 2026-04-07 01:13:02.317696 | orchestrator | 2026-04-07 01:13:02.317700 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-07 01:13:02.317704 | orchestrator | Tuesday 07 April 2026 01:11:56 +0000 (0:00:00.060) 0:01:02.363 ********* 2026-04-07 01:13:02.317707 | orchestrator | 2026-04-07 01:13:02.317711 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-07 01:13:02.317715 | orchestrator | Tuesday 07 April 2026 01:11:56 +0000 (0:00:00.068) 0:01:02.431 ********* 2026-04-07 01:13:02.317720 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:13:02.317724 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:13:02.317728 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:13:02.317732 | orchestrator | 2026-04-07 01:13:02.317736 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-07 01:13:02.317740 | orchestrator | Tuesday 07 April 2026 01:12:03 +0000 (0:00:06.805) 0:01:09.237 ********* 2026-04-07 01:13:02.317743 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:13:02.317747 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:13:02.317751 | orchestrator | FAILED - RETRYING: 
[testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-07 01:13:02.317755 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-04-07 01:13:02.317759 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:13:02.317763 | orchestrator | 2026-04-07 01:13:02.317766 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-07 01:13:02.317770 | orchestrator | Tuesday 07 April 2026 01:12:29 +0000 (0:00:25.949) 0:01:35.186 ********* 2026-04-07 01:13:02.317774 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:13:02.317778 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:13:02.317781 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:13:02.317785 | orchestrator | 2026-04-07 01:13:02.317789 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-07 01:13:02.317793 | orchestrator | Tuesday 07 April 2026 01:12:55 +0000 (0:00:25.688) 0:02:00.875 ********* 2026-04-07 01:13:02.317796 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:13:02.317800 | orchestrator | 2026-04-07 01:13:02.317804 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-07 01:13:02.317808 | orchestrator | Tuesday 07 April 2026 01:12:57 +0000 (0:00:02.225) 0:02:03.100 ********* 2026-04-07 01:13:02.317811 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:13:02.317815 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:13:02.317819 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:13:02.317823 | orchestrator | 2026-04-07 01:13:02.317826 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-07 01:13:02.317830 | orchestrator | Tuesday 07 April 2026 01:12:57 +0000 (0:00:00.281) 0:02:03.382 ********* 2026-04-07 01:13:02.317835 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-04-07 01:13:02.317839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-07 01:13:02.317843 | orchestrator | 2026-04-07 01:13:02.317847 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-07 01:13:02.317851 | orchestrator | Tuesday 07 April 2026 01:12:59 +0000 (0:00:02.255) 0:02:05.637 ********* 2026-04-07 01:13:02.317858 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:13:02.317864 | orchestrator | 2026-04-07 01:13:02.317874 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:13:02.317881 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 01:13:02.317888 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 01:13:02.317894 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 01:13:02.317900 | orchestrator | 2026-04-07 01:13:02.317906 | orchestrator | 2026-04-07 01:13:02.317913 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:13:02.317918 | orchestrator | Tuesday 07 April 2026 01:13:00 +0000 (0:00:00.391) 0:02:06.028 ********* 2026-04-07 01:13:02.317924 | orchestrator | 
=============================================================================== 2026-04-07 01:13:02.317933 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 41.64s 2026-04-07 01:13:02.317940 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 25.95s 2026-04-07 01:13:02.317946 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 25.69s 2026-04-07 01:13:02.317953 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.81s 2026-04-07 01:13:02.317959 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.28s 2026-04-07 01:13:02.317966 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.26s 2026-04-07 01:13:02.317971 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.23s 2026-04-07 01:13:02.317974 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.21s 2026-04-07 01:13:02.317978 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.83s 2026-04-07 01:13:02.317982 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.32s 2026-04-07 01:13:02.317986 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.28s 2026-04-07 01:13:02.317993 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.24s 2026-04-07 01:13:02.317997 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.24s 2026-04-07 01:13:02.318000 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.23s 2026-04-07 01:13:02.318004 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.23s 2026-04-07 01:13:02.318008 | orchestrator | grafana : 
Ensuring config directories exist ----------------------------- 1.11s 2026-04-07 01:13:02.318037 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.95s 2026-04-07 01:13:02.318043 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.76s 2026-04-07 01:13:02.318047 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s 2026-04-07 01:13:02.318051 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.60s 2026-04-07 01:13:05.357368 | orchestrator | 2026-04-07 01:13:05 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED 2026-04-07 01:13:05.357416 | orchestrator | 2026-04-07 01:13:05 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:13:08.397346 | orchestrator | 2026-04-07 01:13:08 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED 2026-04-07 01:13:08.397402 | orchestrator | 2026-04-07 01:13:08 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:13:11.440963 | orchestrator | 2026-04-07 01:13:11 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED 2026-04-07 01:13:11.441051 | orchestrator | 2026-04-07 01:13:11 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:13:14.488216 | orchestrator | 2026-04-07 01:13:14 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED 2026-04-07 01:13:14.488390 | orchestrator | 2026-04-07 01:13:14 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:13:17.529885 | orchestrator | 2026-04-07 01:13:17 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED 2026-04-07 01:13:17.529977 | orchestrator | 2026-04-07 01:13:17 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:13:20.577193 | orchestrator | 2026-04-07 01:13:20 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state STARTED 2026-04-07 01:13:20.577330 | orchestrator | 2026-04-07 
01:13:20 | INFO  | Wait 1 second(s) until the next check 2026-04-07 01:15:43.682326 | orchestrator | 2026-04-07 01:15:43 | INFO  | Task da243fcf-d1ad-4d4a-8aa1-b4a3a91252e3 is in state SUCCESS 2026-04-07 01:15:43.683979 | orchestrator | 2026-04-07 01:15:43.684032 | orchestrator | 2026-04-07 01:15:43.684040 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 01:15:43.684048 | orchestrator | 2026-04-07 01:15:43.684054 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 01:15:43.684061 | orchestrator | Tuesday 07 April 2026 01:11:05 +0000 (0:00:00.308) 0:00:00.308 ********* 2026-04-07 01:15:43.684067 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:15:43.684075 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:15:43.684082 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:15:43.684088 | orchestrator | 2026-04-07 01:15:43.684094 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 01:15:43.684101 | orchestrator | Tuesday 07 April 2026 01:11:06 +0000 (0:00:00.299) 0:00:00.608 ********* 2026-04-07 01:15:43.684107 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-07 01:15:43.684114 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-07 01:15:43.684120 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-07 01:15:43.684126 | orchestrator | 2026-04-07 01:15:43.684133 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-07 01:15:43.684139 | orchestrator | 2026-04-07 01:15:43.684145 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-07 01:15:43.684151 | orchestrator | Tuesday 07 April 2026 01:11:06 +0000 (0:00:00.293) 0:00:00.902 ********* 2026-04-07 01:15:43.684158 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:15:43.684215 | orchestrator | 2026-04-07 01:15:43.684222 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-04-07 01:15:43.684226 | orchestrator | Tuesday 07 April 2026 01:11:06 +0000 (0:00:00.679) 0:00:01.581 ********* 2026-04-07 01:15:43.684230 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-07 01:15:43.684234 | orchestrator | 2026-04-07 01:15:43.684238 | orchestrator | TASK [service-ks-register : octavia | 
Creating endpoints] ********************** 2026-04-07 01:15:43.684241 | orchestrator | Tuesday 07 April 2026 01:11:10 +0000 (0:00:03.924) 0:00:05.505 ********* 2026-04-07 01:15:43.684275 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-07 01:15:43.684280 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-07 01:15:43.684284 | orchestrator | 2026-04-07 01:15:43.684288 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-07 01:15:43.684292 | orchestrator | Tuesday 07 April 2026 01:11:16 +0000 (0:00:06.077) 0:00:11.583 ********* 2026-04-07 01:15:43.684296 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 01:15:43.684314 | orchestrator | 2026-04-07 01:15:43.684319 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-07 01:15:43.684322 | orchestrator | Tuesday 07 April 2026 01:11:19 +0000 (0:00:02.961) 0:00:14.544 ********* 2026-04-07 01:15:43.684326 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-07 01:15:43.684330 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-07 01:15:43.684334 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 01:15:43.684338 | orchestrator | 2026-04-07 01:15:43.684341 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-07 01:15:43.684345 | orchestrator | Tuesday 07 April 2026 01:11:27 +0000 (0:00:07.092) 0:00:21.636 ********* 2026-04-07 01:15:43.684374 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 01:15:43.684379 | orchestrator | 2026-04-07 01:15:43.684383 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-04-07 01:15:43.684387 | orchestrator | Tuesday 07 April 2026 
01:11:30 +0000 (0:00:03.069) 0:00:24.706 ********* 2026-04-07 01:15:43.684391 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-07 01:15:43.684395 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-07 01:15:43.684398 | orchestrator | 2026-04-07 01:15:43.684402 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-07 01:15:43.684406 | orchestrator | Tuesday 07 April 2026 01:11:37 +0000 (0:00:07.054) 0:00:31.760 ********* 2026-04-07 01:15:43.684410 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-07 01:15:43.684414 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-04-07 01:15:43.684417 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-04-07 01:15:43.684421 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-07 01:15:43.684425 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-07 01:15:43.684429 | orchestrator | 2026-04-07 01:15:43.684432 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-07 01:15:43.684436 | orchestrator | Tuesday 07 April 2026 01:11:52 +0000 (0:00:15.223) 0:00:46.984 ********* 2026-04-07 01:15:43.684447 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:15:43.684451 | orchestrator | 2026-04-07 01:15:43.684455 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-04-07 01:15:43.684459 | orchestrator | Tuesday 07 April 2026 01:11:53 +0000 (0:00:00.678) 0:00:47.662 ********* 2026-04-07 01:15:43.684463 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.684508 | orchestrator | 2026-04-07 01:15:43.684512 | orchestrator | TASK [octavia : Create nova keypair for amphora] 
******************************* 2026-04-07 01:15:43.684516 | orchestrator | Tuesday 07 April 2026 01:11:58 +0000 (0:00:05.324) 0:00:52.986 ********* 2026-04-07 01:15:43.684520 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.684523 | orchestrator | 2026-04-07 01:15:43.684527 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-07 01:15:43.684542 | orchestrator | Tuesday 07 April 2026 01:12:01 +0000 (0:00:03.562) 0:00:56.549 ********* 2026-04-07 01:15:43.684546 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:15:43.684550 | orchestrator | 2026-04-07 01:15:43.684554 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-04-07 01:15:43.684557 | orchestrator | Tuesday 07 April 2026 01:12:05 +0000 (0:00:03.290) 0:00:59.839 ********* 2026-04-07 01:15:43.684561 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-07 01:15:43.684565 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-07 01:15:43.684569 | orchestrator | 2026-04-07 01:15:43.684573 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-04-07 01:15:43.684576 | orchestrator | Tuesday 07 April 2026 01:12:15 +0000 (0:00:10.309) 0:01:10.149 ********* 2026-04-07 01:15:43.684586 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-07 01:15:43.684590 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-07 01:15:43.684595 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-07 01:15:43.684599 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, 
{'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-07 01:15:43.684604 | orchestrator | 2026-04-07 01:15:43.684610 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-07 01:15:43.684616 | orchestrator | Tuesday 07 April 2026 01:12:30 +0000 (0:00:14.727) 0:01:24.877 ********* 2026-04-07 01:15:43.684621 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.684627 | orchestrator | 2026-04-07 01:15:43.684633 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-07 01:15:43.684639 | orchestrator | Tuesday 07 April 2026 01:12:34 +0000 (0:00:04.449) 0:01:29.327 ********* 2026-04-07 01:15:43.684645 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.684650 | orchestrator | 2026-04-07 01:15:43.684657 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-07 01:15:43.684662 | orchestrator | Tuesday 07 April 2026 01:12:39 +0000 (0:00:04.900) 0:01:34.227 ********* 2026-04-07 01:15:43.684669 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:15:43.684675 | orchestrator | 2026-04-07 01:15:43.684681 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-07 01:15:43.684685 | orchestrator | Tuesday 07 April 2026 01:12:40 +0000 (0:00:00.536) 0:01:34.763 ********* 2026-04-07 01:15:43.684689 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:15:43.684692 | orchestrator | 2026-04-07 01:15:43.684696 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-07 01:15:43.684700 | orchestrator | Tuesday 07 April 2026 01:12:44 +0000 (0:00:04.487) 0:01:39.251 ********* 2026-04-07 01:15:43.684905 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:15:43.684915 | orchestrator | 2026-04-07 01:15:43.684921 | 
orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-07 01:15:43.684928 | orchestrator | Tuesday 07 April 2026 01:12:45 +0000 (0:00:00.830) 0:01:40.081 ********* 2026-04-07 01:15:43.684934 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.684940 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.684947 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.684953 | orchestrator | 2026-04-07 01:15:43.684960 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-07 01:15:43.684964 | orchestrator | Tuesday 07 April 2026 01:12:51 +0000 (0:00:05.956) 0:01:46.038 ********* 2026-04-07 01:15:43.684969 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.684975 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.684981 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.684988 | orchestrator | 2026-04-07 01:15:43.684993 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-07 01:15:43.685000 | orchestrator | Tuesday 07 April 2026 01:12:56 +0000 (0:00:04.964) 0:01:51.002 ********* 2026-04-07 01:15:43.685007 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.685013 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.685019 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.685025 | orchestrator | 2026-04-07 01:15:43.685029 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-07 01:15:43.685032 | orchestrator | Tuesday 07 April 2026 01:12:57 +0000 (0:00:00.698) 0:01:51.701 ********* 2026-04-07 01:15:43.685036 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:15:43.685040 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:15:43.685050 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:15:43.685054 | orchestrator | 2026-04-07 01:15:43.685057 | orchestrator | TASK [octavia : 
Create octavia dhclient conf] ********************************** 2026-04-07 01:15:43.685061 | orchestrator | Tuesday 07 April 2026 01:12:58 +0000 (0:00:01.676) 0:01:53.377 ********* 2026-04-07 01:15:43.685065 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.685073 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.685077 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.685081 | orchestrator | 2026-04-07 01:15:43.685085 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-07 01:15:43.685089 | orchestrator | Tuesday 07 April 2026 01:12:59 +0000 (0:00:01.188) 0:01:54.566 ********* 2026-04-07 01:15:43.685093 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.685096 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.685100 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.685104 | orchestrator | 2026-04-07 01:15:43.685108 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-07 01:15:43.685112 | orchestrator | Tuesday 07 April 2026 01:13:01 +0000 (0:00:01.203) 0:01:55.769 ********* 2026-04-07 01:15:43.685115 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.685119 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.685123 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.685127 | orchestrator | 2026-04-07 01:15:43.685136 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-07 01:15:43.685139 | orchestrator | Tuesday 07 April 2026 01:13:03 +0000 (0:00:02.222) 0:01:57.991 ********* 2026-04-07 01:15:43.685143 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.685147 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.685151 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.685154 | orchestrator | 2026-04-07 01:15:43.685158 | orchestrator | TASK [octavia : Wait for 
interface ohm0 ip appear] ***************************** 2026-04-07 01:15:43.685162 | orchestrator | Tuesday 07 April 2026 01:13:05 +0000 (0:00:02.324) 0:02:00.316 ********* 2026-04-07 01:15:43.685166 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:15:43.685169 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:15:43.685173 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:15:43.685210 | orchestrator | 2026-04-07 01:15:43.685214 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-07 01:15:43.685218 | orchestrator | Tuesday 07 April 2026 01:13:06 +0000 (0:00:00.574) 0:02:00.891 ********* 2026-04-07 01:15:43.685222 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:15:43.685225 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:15:43.685229 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:15:43.685233 | orchestrator | 2026-04-07 01:15:43.685236 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-07 01:15:43.685240 | orchestrator | Tuesday 07 April 2026 01:13:09 +0000 (0:00:02.806) 0:02:03.698 ********* 2026-04-07 01:15:43.685279 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:15:43.685284 | orchestrator | 2026-04-07 01:15:43.685448 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-07 01:15:43.685458 | orchestrator | Tuesday 07 April 2026 01:13:09 +0000 (0:00:00.682) 0:02:04.381 ********* 2026-04-07 01:15:43.685464 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:15:43.685470 | orchestrator | 2026-04-07 01:15:43.685477 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-07 01:15:43.685483 | orchestrator | Tuesday 07 April 2026 01:13:13 +0000 (0:00:03.388) 0:02:07.769 ********* 2026-04-07 01:15:43.685489 | orchestrator | ok: [testbed-node-0] 
2026-04-07 01:15:43.685495 | orchestrator | 2026-04-07 01:15:43.685501 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-07 01:15:43.685508 | orchestrator | Tuesday 07 April 2026 01:13:16 +0000 (0:00:02.989) 0:02:10.758 ********* 2026-04-07 01:15:43.685513 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-07 01:15:43.685518 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-07 01:15:43.685527 | orchestrator | 2026-04-07 01:15:43.685531 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-07 01:15:43.685535 | orchestrator | Tuesday 07 April 2026 01:13:22 +0000 (0:00:06.653) 0:02:17.412 ********* 2026-04-07 01:15:43.685538 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:15:43.685542 | orchestrator | 2026-04-07 01:15:43.685546 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-07 01:15:43.685550 | orchestrator | Tuesday 07 April 2026 01:13:26 +0000 (0:00:03.242) 0:02:20.655 ********* 2026-04-07 01:15:43.685554 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:15:43.685558 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:15:43.685561 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:15:43.685565 | orchestrator | 2026-04-07 01:15:43.685569 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-07 01:15:43.685573 | orchestrator | Tuesday 07 April 2026 01:13:26 +0000 (0:00:00.289) 0:02:20.944 ********* 2026-04-07 01:15:43.685579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.685604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.685609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.685614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.685623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.685627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.685631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685688 | orchestrator | 2026-04-07 01:15:43.685691 | orchestrator | TASK [octavia 
: Check if policies shall be overwritten] ************************ 2026-04-07 01:15:43.685695 | orchestrator | Tuesday 07 April 2026 01:13:28 +0000 (0:00:02.551) 0:02:23.495 ********* 2026-04-07 01:15:43.685699 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:15:43.685703 | orchestrator | 2026-04-07 01:15:43.685716 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-07 01:15:43.685720 | orchestrator | Tuesday 07 April 2026 01:13:29 +0000 (0:00:00.124) 0:02:23.620 ********* 2026-04-07 01:15:43.685724 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:15:43.685727 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:15:43.685731 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:15:43.685735 | orchestrator | 2026-04-07 01:15:43.685739 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-07 01:15:43.685743 | orchestrator | Tuesday 07 April 2026 01:13:29 +0000 (0:00:00.310) 0:02:23.931 ********* 2026-04-07 01:15:43.685747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-04-07 01:15:43.685754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 01:15:43.685758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.685763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.685769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:15:43.685773 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:15:43.685788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 01:15:43.685795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 01:15:43.685800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.685803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.685807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 
01:15:43.685811 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:15:43.685820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 01:15:43.685834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 01:15:43.685841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.685845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.685849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:15:43.685853 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:15:43.685857 | orchestrator | 2026-04-07 01:15:43.685860 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-07 01:15:43.685864 | orchestrator | Tuesday 07 April 2026 01:13:29 +0000 (0:00:00.632) 0:02:24.563 ********* 2026-04-07 01:15:43.685868 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 
01:15:43.685872 | orchestrator | 2026-04-07 01:15:43.685876 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-07 01:15:43.685880 | orchestrator | Tuesday 07 April 2026 01:13:30 +0000 (0:00:00.734) 0:02:25.297 ********* 2026-04-07 01:15:43.685888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.685902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.685911 | 
orchestrator | 2026-04-07 01:15:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:15:43.685917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.685921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.685925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.685929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.685935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.685992 | orchestrator | 2026-04-07 01:15:43.685996 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-07 01:15:43.686000 | orchestrator | Tuesday 07 April 2026 01:13:35 +0000 (0:00:05.091) 0:02:30.389 ********* 2026-04-07 01:15:43.686004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 01:15:43.686008 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 01:15:43.686058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:15:43.686078 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:15:43.686086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 01:15:43.686091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 
01:15:43.686094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:15:43.686106 | orchestrator | skipping: [testbed-node-1] 2026-04-07 
01:15:43.686112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 01:15:43.686122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 01:15:43.686126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:15:43.686138 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:15:43.686142 | orchestrator | 2026-04-07 01:15:43.686145 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-07 01:15:43.686149 | orchestrator | Tuesday 07 April 2026 01:13:36 +0000 (0:00:00.655) 0:02:31.045 ********* 2026-04-07 01:15:43.686153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 01:15:43.686162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 01:15:43.686169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686173 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:15:43.686181 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:15:43.686185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 01:15:43.686189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 01:15:43.686196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:15:43.686213 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:15:43.686217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 01:15:43.686221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 01:15:43.686225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 01:15:43.686239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 01:15:43.686257 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:15:43.686261 | orchestrator | 2026-04-07 01:15:43.686265 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-07 01:15:43.686269 | orchestrator | Tuesday 07 April 2026 01:13:37 +0000 (0:00:01.001) 0:02:32.047 ********* 2026-04-07 01:15:43.686277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.686281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.686285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.686292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.686298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.686306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.686310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686355 | orchestrator | 2026-04-07 01:15:43.686359 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-07 01:15:43.686362 | orchestrator | Tuesday 07 April 2026 01:13:42 +0000 (0:00:05.522) 0:02:37.569 ********* 2026-04-07 01:15:43.686370 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-07 01:15:43.686374 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-07 01:15:43.686378 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-07 01:15:43.686382 | orchestrator | 2026-04-07 01:15:43.686386 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-07 01:15:43.686389 | orchestrator | Tuesday 07 April 2026 01:13:44 +0000 (0:00:01.522) 0:02:39.091 ********* 2026-04-07 01:15:43.686393 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.686399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.686406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.686410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.686417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.686421 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.686425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686496 | orchestrator | 2026-04-07 01:15:43.686501 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-07 01:15:43.686507 | orchestrator | Tuesday 07 April 2026 01:14:01 +0000 (0:00:16.759) 0:02:55.851 ********* 2026-04-07 01:15:43.686513 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.686518 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.686523 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.686529 | orchestrator | 2026-04-07 01:15:43.686534 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-07 01:15:43.686540 | orchestrator | Tuesday 07 April 2026 01:14:03 +0000 (0:00:01.936) 0:02:57.787 ********* 2026-04-07 01:15:43.686549 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-07 01:15:43.686555 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-07 01:15:43.686561 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-07 01:15:43.686566 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-07 01:15:43.686572 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-07 01:15:43.686578 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-07 01:15:43.686583 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-07 01:15:43.686589 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-07 01:15:43.686601 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-07 01:15:43.686607 | 
orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-07 01:15:43.686613 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-07 01:15:43.686619 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-07 01:15:43.686624 | orchestrator | 2026-04-07 01:15:43.686629 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-07 01:15:43.686635 | orchestrator | Tuesday 07 April 2026 01:14:08 +0000 (0:00:05.066) 0:03:02.854 ********* 2026-04-07 01:15:43.686641 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-07 01:15:43.686647 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-07 01:15:43.686654 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-07 01:15:43.686659 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-07 01:15:43.686665 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-07 01:15:43.686671 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-07 01:15:43.686677 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-07 01:15:43.686683 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-07 01:15:43.686689 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-07 01:15:43.686695 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-07 01:15:43.686701 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-07 01:15:43.686707 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-07 01:15:43.686713 | orchestrator | 2026-04-07 01:15:43.686718 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-07 01:15:43.686722 | orchestrator | Tuesday 07 April 2026 01:14:13 
+0000 (0:00:05.280) 0:03:08.134 ********* 2026-04-07 01:15:43.686726 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-07 01:15:43.686730 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-07 01:15:43.686734 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-07 01:15:43.686738 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-07 01:15:43.686741 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-07 01:15:43.686745 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-07 01:15:43.686749 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-07 01:15:43.686753 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-07 01:15:43.686756 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-07 01:15:43.686760 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-07 01:15:43.686764 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-07 01:15:43.686767 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-07 01:15:43.686771 | orchestrator | 2026-04-07 01:15:43.686775 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-07 01:15:43.686779 | orchestrator | Tuesday 07 April 2026 01:14:18 +0000 (0:00:05.053) 0:03:13.188 ********* 2026-04-07 01:15:43.686785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.686800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.686804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 01:15:43.686808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.686812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.686816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 01:15:43.686822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 
01:15:43.686853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 01:15:43.686875 | orchestrator | 2026-04-07 01:15:43.686879 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-07 
01:15:43.686883 | orchestrator | Tuesday 07 April 2026 01:14:22 +0000 (0:00:04.092) 0:03:17.280 ********* 2026-04-07 01:15:43.686886 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:15:43.686890 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:15:43.686894 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:15:43.686897 | orchestrator | 2026-04-07 01:15:43.686901 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-07 01:15:43.686905 | orchestrator | Tuesday 07 April 2026 01:14:23 +0000 (0:00:00.455) 0:03:17.736 ********* 2026-04-07 01:15:43.686909 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.686912 | orchestrator | 2026-04-07 01:15:43.686916 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-07 01:15:43.686920 | orchestrator | Tuesday 07 April 2026 01:14:25 +0000 (0:00:02.028) 0:03:19.765 ********* 2026-04-07 01:15:43.686923 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.686927 | orchestrator | 2026-04-07 01:15:43.686931 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-07 01:15:43.686935 | orchestrator | Tuesday 07 April 2026 01:14:27 +0000 (0:00:01.966) 0:03:21.732 ********* 2026-04-07 01:15:43.686938 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.686942 | orchestrator | 2026-04-07 01:15:43.686946 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-07 01:15:43.686949 | orchestrator | Tuesday 07 April 2026 01:14:29 +0000 (0:00:02.143) 0:03:23.875 ********* 2026-04-07 01:15:43.686953 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.686957 | orchestrator | 2026-04-07 01:15:43.686961 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-07 01:15:43.686964 | orchestrator | Tuesday 07 April 2026 01:14:31 +0000 
(0:00:02.121) 0:03:25.997 ********* 2026-04-07 01:15:43.686968 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.686972 | orchestrator | 2026-04-07 01:15:43.686975 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-07 01:15:43.686979 | orchestrator | Tuesday 07 April 2026 01:14:51 +0000 (0:00:20.503) 0:03:46.500 ********* 2026-04-07 01:15:43.686983 | orchestrator | 2026-04-07 01:15:43.686986 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-07 01:15:43.686990 | orchestrator | Tuesday 07 April 2026 01:14:51 +0000 (0:00:00.067) 0:03:46.568 ********* 2026-04-07 01:15:43.686994 | orchestrator | 2026-04-07 01:15:43.686998 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-07 01:15:43.687001 | orchestrator | Tuesday 07 April 2026 01:14:52 +0000 (0:00:00.066) 0:03:46.634 ********* 2026-04-07 01:15:43.687008 | orchestrator | 2026-04-07 01:15:43.687011 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-07 01:15:43.687015 | orchestrator | Tuesday 07 April 2026 01:14:52 +0000 (0:00:00.065) 0:03:46.700 ********* 2026-04-07 01:15:43.687019 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.687023 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.687027 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.687033 | orchestrator | 2026-04-07 01:15:43.687039 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-07 01:15:43.687048 | orchestrator | Tuesday 07 April 2026 01:15:06 +0000 (0:00:14.811) 0:04:01.511 ********* 2026-04-07 01:15:43.687058 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.687063 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.687068 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.687074 | orchestrator | 
2026-04-07 01:15:43.687080 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-07 01:15:43.687085 | orchestrator | Tuesday 07 April 2026 01:15:18 +0000 (0:00:11.464) 0:04:12.976 ********* 2026-04-07 01:15:43.687091 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.687097 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.687103 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.687108 | orchestrator | 2026-04-07 01:15:43.687114 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-07 01:15:43.687119 | orchestrator | Tuesday 07 April 2026 01:15:28 +0000 (0:00:09.817) 0:04:22.794 ********* 2026-04-07 01:15:43.687125 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.687130 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.687136 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.687142 | orchestrator | 2026-04-07 01:15:43.687148 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-07 01:15:43.687153 | orchestrator | Tuesday 07 April 2026 01:15:37 +0000 (0:00:09.790) 0:04:32.585 ********* 2026-04-07 01:15:43.687160 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:15:43.687166 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:15:43.687173 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:15:43.687179 | orchestrator | 2026-04-07 01:15:43.687189 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:15:43.687196 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 01:15:43.687203 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 01:15:43.687207 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-07 01:15:43.687211 | orchestrator | 2026-04-07 01:15:43.687215 | orchestrator | 2026-04-07 01:15:43.687219 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:15:43.687227 | orchestrator | Tuesday 07 April 2026 01:15:43 +0000 (0:00:05.400) 0:04:37.986 ********* 2026-04-07 01:15:43.687231 | orchestrator | =============================================================================== 2026-04-07 01:15:43.687235 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.50s 2026-04-07 01:15:43.687238 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.76s 2026-04-07 01:15:43.687242 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.22s 2026-04-07 01:15:43.687263 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.81s 2026-04-07 01:15:43.687267 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.73s 2026-04-07 01:15:43.687271 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.46s 2026-04-07 01:15:43.687275 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.31s 2026-04-07 01:15:43.687283 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.82s 2026-04-07 01:15:43.687286 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.79s 2026-04-07 01:15:43.687290 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.09s 2026-04-07 01:15:43.687294 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.05s 2026-04-07 01:15:43.687298 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.65s 2026-04-07 01:15:43.687301 | 
orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.08s 2026-04-07 01:15:43.687305 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.96s 2026-04-07 01:15:43.687309 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.52s 2026-04-07 01:15:43.687313 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.40s 2026-04-07 01:15:43.687316 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.32s 2026-04-07 01:15:43.687320 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.28s 2026-04-07 01:15:43.687324 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.09s 2026-04-07 01:15:43.687328 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.07s 2026-04-07 01:15:46.730061 | orchestrator | 2026-04-07 01:15:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:15:49.770461 | orchestrator | 2026-04-07 01:15:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:15:52.813958 | orchestrator | 2026-04-07 01:15:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:15:55.856908 | orchestrator | 2026-04-07 01:15:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:15:58.899657 | orchestrator | 2026-04-07 01:15:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:01.933165 | orchestrator | 2026-04-07 01:16:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:04.974589 | orchestrator | 2026-04-07 01:16:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:08.021964 | orchestrator | 2026-04-07 01:16:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:11.070582 | orchestrator | 2026-04-07 
01:16:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:14.113027 | orchestrator | 2026-04-07 01:16:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:17.157874 | orchestrator | 2026-04-07 01:16:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:20.195422 | orchestrator | 2026-04-07 01:16:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:23.240484 | orchestrator | 2026-04-07 01:16:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:26.280434 | orchestrator | 2026-04-07 01:16:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:29.321799 | orchestrator | 2026-04-07 01:16:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:32.357115 | orchestrator | 2026-04-07 01:16:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:35.396695 | orchestrator | 2026-04-07 01:16:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:38.437183 | orchestrator | 2026-04-07 01:16:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:41.489121 | orchestrator | 2026-04-07 01:16:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-07 01:16:44.529161 | orchestrator | 2026-04-07 01:16:44.712871 | orchestrator | 2026-04-07 01:16:44.718215 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Apr 7 01:16:44 UTC 2026 2026-04-07 01:16:44.718310 | orchestrator | 2026-04-07 01:16:45.128972 | orchestrator | ok: Runtime: 0:32:21.369354 2026-04-07 01:16:45.366967 | 2026-04-07 01:16:45.367111 | TASK [Bootstrap services] 2026-04-07 01:16:46.135520 | orchestrator | 2026-04-07 01:16:46.135698 | orchestrator | # BOOTSTRAP 2026-04-07 01:16:46.135716 | orchestrator | 2026-04-07 01:16:46.135727 | orchestrator | + set -e 2026-04-07 01:16:46.135738 | orchestrator | + echo 2026-04-07 01:16:46.135749 | orchestrator | + echo '# BOOTSTRAP' 
2026-04-07 01:16:46.135763 | orchestrator | + echo 2026-04-07 01:16:46.135802 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-07 01:16:46.141560 | orchestrator | + set -e 2026-04-07 01:16:46.141633 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-07 01:16:50.888475 | orchestrator | 2026-04-07 01:16:50 | INFO  | It takes a moment until task 3918944b-b5de-472c-8777-8c17731eccf0 (flavor-manager) has been started and output is visible here. 2026-04-07 01:16:59.912352 | orchestrator | 2026-04-07 01:16:54 | INFO  | Flavor SCS-1L-1 created 2026-04-07 01:16:59.912467 | orchestrator | 2026-04-07 01:16:55 | INFO  | Flavor SCS-1L-1-5 created 2026-04-07 01:16:59.912480 | orchestrator | 2026-04-07 01:16:55 | INFO  | Flavor SCS-1V-2 created 2026-04-07 01:16:59.912487 | orchestrator | 2026-04-07 01:16:55 | INFO  | Flavor SCS-1V-2-5 created 2026-04-07 01:16:59.912494 | orchestrator | 2026-04-07 01:16:55 | INFO  | Flavor SCS-1V-4 created 2026-04-07 01:16:59.912500 | orchestrator | 2026-04-07 01:16:55 | INFO  | Flavor SCS-1V-4-10 created 2026-04-07 01:16:59.912506 | orchestrator | 2026-04-07 01:16:56 | INFO  | Flavor SCS-1V-8 created 2026-04-07 01:16:59.912512 | orchestrator | 2026-04-07 01:16:56 | INFO  | Flavor SCS-1V-8-20 created 2026-04-07 01:16:59.912525 | orchestrator | 2026-04-07 01:16:56 | INFO  | Flavor SCS-2V-4 created 2026-04-07 01:16:59.912532 | orchestrator | 2026-04-07 01:16:56 | INFO  | Flavor SCS-2V-4-10 created 2026-04-07 01:16:59.912539 | orchestrator | 2026-04-07 01:16:56 | INFO  | Flavor SCS-2V-8 created 2026-04-07 01:16:59.912546 | orchestrator | 2026-04-07 01:16:57 | INFO  | Flavor SCS-2V-8-20 created 2026-04-07 01:16:59.912553 | orchestrator | 2026-04-07 01:16:57 | INFO  | Flavor SCS-2V-16 created 2026-04-07 01:16:59.912559 | orchestrator | 2026-04-07 01:16:57 | INFO  | Flavor SCS-2V-16-50 created 2026-04-07 01:16:59.912565 | orchestrator | 2026-04-07 01:16:57 | INFO  | Flavor SCS-4V-8 
created 2026-04-07 01:16:59.912583 | orchestrator | 2026-04-07 01:16:57 | INFO  | Flavor SCS-4V-8-20 created 2026-04-07 01:16:59.912599 | orchestrator | 2026-04-07 01:16:57 | INFO  | Flavor SCS-4V-16 created 2026-04-07 01:16:59.912605 | orchestrator | 2026-04-07 01:16:57 | INFO  | Flavor SCS-4V-16-50 created 2026-04-07 01:16:59.912615 | orchestrator | 2026-04-07 01:16:58 | INFO  | Flavor SCS-4V-32 created 2026-04-07 01:16:59.912620 | orchestrator | 2026-04-07 01:16:58 | INFO  | Flavor SCS-4V-32-100 created 2026-04-07 01:16:59.912624 | orchestrator | 2026-04-07 01:16:58 | INFO  | Flavor SCS-8V-16 created 2026-04-07 01:16:59.912629 | orchestrator | 2026-04-07 01:16:58 | INFO  | Flavor SCS-8V-16-50 created 2026-04-07 01:16:59.912638 | orchestrator | 2026-04-07 01:16:58 | INFO  | Flavor SCS-8V-32 created 2026-04-07 01:16:59.912642 | orchestrator | 2026-04-07 01:16:58 | INFO  | Flavor SCS-8V-32-100 created 2026-04-07 01:16:59.912646 | orchestrator | 2026-04-07 01:16:59 | INFO  | Flavor SCS-16V-32 created 2026-04-07 01:16:59.912650 | orchestrator | 2026-04-07 01:16:59 | INFO  | Flavor SCS-16V-32-100 created 2026-04-07 01:16:59.912653 | orchestrator | 2026-04-07 01:16:59 | INFO  | Flavor SCS-2V-4-20s created 2026-04-07 01:16:59.912657 | orchestrator | 2026-04-07 01:16:59 | INFO  | Flavor SCS-4V-8-50s created 2026-04-07 01:16:59.912661 | orchestrator | 2026-04-07 01:16:59 | INFO  | Flavor SCS-4V-16-100s created 2026-04-07 01:16:59.912665 | orchestrator | 2026-04-07 01:16:59 | INFO  | Flavor SCS-8V-32-100s created 2026-04-07 01:17:01.604703 | orchestrator | 2026-04-07 01:17:01 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-07 01:17:11.758088 | orchestrator | 2026-04-07 01:17:11 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-07 01:17:11.841297 | orchestrator | 2026-04-07 01:17:11 | INFO  | Task 78246e5e-4b01-4230-97f8-fdced342eb9c (bootstrap-basic) was prepared for execution. 
2026-04-07 01:17:11.841372 | orchestrator | 2026-04-07 01:17:11 | INFO  | It takes a moment until task 78246e5e-4b01-4230-97f8-fdced342eb9c (bootstrap-basic) has been started and output is visible here. 2026-04-07 01:17:57.974332 | orchestrator | 2026-04-07 01:17:57.974446 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-07 01:17:57.974459 | orchestrator | 2026-04-07 01:17:57.974465 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 01:17:57.974472 | orchestrator | Tuesday 07 April 2026 01:17:15 +0000 (0:00:00.121) 0:00:00.121 ********* 2026-04-07 01:17:57.974479 | orchestrator | ok: [localhost] 2026-04-07 01:17:57.974486 | orchestrator | 2026-04-07 01:17:57.974492 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-07 01:17:57.974499 | orchestrator | Tuesday 07 April 2026 01:17:17 +0000 (0:00:02.022) 0:00:02.143 ********* 2026-04-07 01:17:57.974508 | orchestrator | ok: [localhost] 2026-04-07 01:17:57.974515 | orchestrator | 2026-04-07 01:17:57.974521 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-07 01:17:57.974528 | orchestrator | Tuesday 07 April 2026 01:17:25 +0000 (0:00:08.420) 0:00:10.564 ********* 2026-04-07 01:17:57.974535 | orchestrator | changed: [localhost] 2026-04-07 01:17:57.974543 | orchestrator | 2026-04-07 01:17:57.974549 | orchestrator | TASK [Create public network] *************************************************** 2026-04-07 01:17:57.974553 | orchestrator | Tuesday 07 April 2026 01:17:33 +0000 (0:00:08.209) 0:00:18.773 ********* 2026-04-07 01:17:57.974557 | orchestrator | changed: [localhost] 2026-04-07 01:17:57.974561 | orchestrator | 2026-04-07 01:17:57.974568 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-07 01:17:57.974572 | orchestrator | Tuesday 07 April 
2026 01:17:39 +0000 (0:00:05.298) 0:00:24.072 ********* 2026-04-07 01:17:57.974577 | orchestrator | changed: [localhost] 2026-04-07 01:17:57.974581 | orchestrator | 2026-04-07 01:17:57.974586 | orchestrator | TASK [Create public subnet] **************************************************** 2026-04-07 01:17:57.974592 | orchestrator | Tuesday 07 April 2026 01:17:45 +0000 (0:00:06.625) 0:00:30.697 ********* 2026-04-07 01:17:57.974599 | orchestrator | changed: [localhost] 2026-04-07 01:17:57.974605 | orchestrator | 2026-04-07 01:17:57.974611 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-07 01:17:57.974616 | orchestrator | Tuesday 07 April 2026 01:17:50 +0000 (0:00:04.367) 0:00:35.064 ********* 2026-04-07 01:17:57.974622 | orchestrator | changed: [localhost] 2026-04-07 01:17:57.974627 | orchestrator | 2026-04-07 01:17:57.974632 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-07 01:17:57.974650 | orchestrator | Tuesday 07 April 2026 01:17:54 +0000 (0:00:04.002) 0:00:39.067 ********* 2026-04-07 01:17:57.974660 | orchestrator | ok: [localhost] 2026-04-07 01:17:57.974667 | orchestrator | 2026-04-07 01:17:57.974673 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:17:57.974679 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 01:17:57.974686 | orchestrator | 2026-04-07 01:17:57.974692 | orchestrator | 2026-04-07 01:17:57.974698 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:17:57.974705 | orchestrator | Tuesday 07 April 2026 01:17:57 +0000 (0:00:03.725) 0:00:42.792 ********* 2026-04-07 01:17:57.974710 | orchestrator | =============================================================================== 2026-04-07 01:17:57.974716 | orchestrator | Get volume type LUKS 
---------------------------------------------------- 8.42s 2026-04-07 01:17:57.974742 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.21s 2026-04-07 01:17:57.974748 | orchestrator | Set public network to default ------------------------------------------- 6.63s 2026-04-07 01:17:57.974754 | orchestrator | Create public network --------------------------------------------------- 5.30s 2026-04-07 01:17:57.974760 | orchestrator | Create public subnet ---------------------------------------------------- 4.37s 2026-04-07 01:17:57.974766 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.00s 2026-04-07 01:17:57.974772 | orchestrator | Create manager role ----------------------------------------------------- 3.73s 2026-04-07 01:17:57.974778 | orchestrator | Gathering Facts --------------------------------------------------------- 2.02s 2026-04-07 01:18:00.089490 | orchestrator | 2026-04-07 01:18:00 | INFO  | It takes a moment until task be184558-a0e4-40b7-b6d3-8b5acccf6a93 (image-manager) has been started and output is visible here. 2026-04-07 01:18:42.982193 | orchestrator | 2026-04-07 01:18:03 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-07 01:18:42.982368 | orchestrator | 2026-04-07 01:18:03 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-07 01:18:42.982388 | orchestrator | 2026-04-07 01:18:03 | INFO  | Importing image Cirros 0.6.2 2026-04-07 01:18:42.982394 | orchestrator | 2026-04-07 01:18:03 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-07 01:18:42.982400 | orchestrator | 2026-04-07 01:18:05 | INFO  | Waiting for image to leave queued state... 2026-04-07 01:18:42.982406 | orchestrator | 2026-04-07 01:18:07 | INFO  | Waiting for import to complete... 
2026-04-07 01:18:42.982410 | orchestrator | 2026-04-07 01:18:18 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-07 01:18:42.982416 | orchestrator | 2026-04-07 01:18:18 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-07 01:18:42.982421 | orchestrator | 2026-04-07 01:18:18 | INFO  | Setting internal_version = 0.6.2 2026-04-07 01:18:42.982426 | orchestrator | 2026-04-07 01:18:18 | INFO  | Setting image_original_user = cirros 2026-04-07 01:18:42.982431 | orchestrator | 2026-04-07 01:18:18 | INFO  | Adding tag os:cirros 2026-04-07 01:18:42.982435 | orchestrator | 2026-04-07 01:18:18 | INFO  | Setting property architecture: x86_64 2026-04-07 01:18:42.982439 | orchestrator | 2026-04-07 01:18:18 | INFO  | Setting property hw_disk_bus: scsi 2026-04-07 01:18:42.982443 | orchestrator | 2026-04-07 01:18:19 | INFO  | Setting property hw_rng_model: virtio 2026-04-07 01:18:42.982448 | orchestrator | 2026-04-07 01:18:19 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-07 01:18:42.982453 | orchestrator | 2026-04-07 01:18:19 | INFO  | Setting property hw_watchdog_action: reset 2026-04-07 01:18:42.982457 | orchestrator | 2026-04-07 01:18:19 | INFO  | Setting property hypervisor_type: qemu 2026-04-07 01:18:42.982469 | orchestrator | 2026-04-07 01:18:20 | INFO  | Setting property os_distro: cirros 2026-04-07 01:18:42.982474 | orchestrator | 2026-04-07 01:18:20 | INFO  | Setting property os_purpose: minimal 2026-04-07 01:18:42.982478 | orchestrator | 2026-04-07 01:18:20 | INFO  | Setting property replace_frequency: never 2026-04-07 01:18:42.982482 | orchestrator | 2026-04-07 01:18:20 | INFO  | Setting property uuid_validity: none 2026-04-07 01:18:42.982486 | orchestrator | 2026-04-07 01:18:20 | INFO  | Setting property provided_until: none 2026-04-07 01:18:42.982490 | orchestrator | 2026-04-07 01:18:21 | INFO  | Setting property image_description: Cirros 2026-04-07 01:18:42.982495 | orchestrator | 2026-04-07 01:18:21 | INFO  | 
Setting property image_name: Cirros 2026-04-07 01:18:42.982515 | orchestrator | 2026-04-07 01:18:21 | INFO  | Setting property internal_version: 0.6.2 2026-04-07 01:18:42.982520 | orchestrator | 2026-04-07 01:18:21 | INFO  | Setting property image_original_user: cirros 2026-04-07 01:18:42.982524 | orchestrator | 2026-04-07 01:18:22 | INFO  | Setting property os_version: 0.6.2 2026-04-07 01:18:42.982529 | orchestrator | 2026-04-07 01:18:22 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-07 01:18:42.982535 | orchestrator | 2026-04-07 01:18:22 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-07 01:18:42.982539 | orchestrator | 2026-04-07 01:18:22 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-07 01:18:42.982552 | orchestrator | 2026-04-07 01:18:22 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-07 01:18:42.982566 | orchestrator | 2026-04-07 01:18:22 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-07 01:18:42.982571 | orchestrator | 2026-04-07 01:18:23 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-07 01:18:42.982575 | orchestrator | 2026-04-07 01:18:23 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-07 01:18:42.982579 | orchestrator | 2026-04-07 01:18:23 | INFO  | Importing image Cirros 0.6.3 2026-04-07 01:18:42.982583 | orchestrator | 2026-04-07 01:18:23 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-07 01:18:42.982588 | orchestrator | 2026-04-07 01:18:24 | INFO  | Waiting for image to leave queued state... 2026-04-07 01:18:42.982592 | orchestrator | 2026-04-07 01:18:26 | INFO  | Waiting for import to complete... 
2026-04-07 01:18:42.982610 | orchestrator | 2026-04-07 01:18:37 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-07 01:18:42.982614 | orchestrator | 2026-04-07 01:18:37 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-07 01:18:42.982618 | orchestrator | 2026-04-07 01:18:37 | INFO  | Setting internal_version = 0.6.3
2026-04-07 01:18:42.982622 | orchestrator | 2026-04-07 01:18:37 | INFO  | Setting image_original_user = cirros
2026-04-07 01:18:42.982627 | orchestrator | 2026-04-07 01:18:37 | INFO  | Adding tag os:cirros
2026-04-07 01:18:42.982631 | orchestrator | 2026-04-07 01:18:37 | INFO  | Setting property architecture: x86_64
2026-04-07 01:18:42.982635 | orchestrator | 2026-04-07 01:18:37 | INFO  | Setting property hw_disk_bus: scsi
2026-04-07 01:18:42.982640 | orchestrator | 2026-04-07 01:18:38 | INFO  | Setting property hw_rng_model: virtio
2026-04-07 01:18:42.982646 | orchestrator | 2026-04-07 01:18:38 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-07 01:18:42.982653 | orchestrator | 2026-04-07 01:18:38 | INFO  | Setting property hw_watchdog_action: reset
2026-04-07 01:18:42.982664 | orchestrator | 2026-04-07 01:18:38 | INFO  | Setting property hypervisor_type: qemu
2026-04-07 01:18:42.982671 | orchestrator | 2026-04-07 01:18:38 | INFO  | Setting property os_distro: cirros
2026-04-07 01:18:42.982679 | orchestrator | 2026-04-07 01:18:39 | INFO  | Setting property os_purpose: minimal
2026-04-07 01:18:42.982686 | orchestrator | 2026-04-07 01:18:39 | INFO  | Setting property replace_frequency: never
2026-04-07 01:18:42.982693 | orchestrator | 2026-04-07 01:18:39 | INFO  | Setting property uuid_validity: none
2026-04-07 01:18:42.982700 | orchestrator | 2026-04-07 01:18:40 | INFO  | Setting property provided_until: none
2026-04-07 01:18:42.982706 | orchestrator | 2026-04-07 01:18:40 | INFO  | Setting property image_description: Cirros
2026-04-07 01:18:42.982719 | orchestrator | 2026-04-07 01:18:40 | INFO  | Setting property image_name: Cirros
2026-04-07 01:18:42.982727 | orchestrator | 2026-04-07 01:18:40 | INFO  | Setting property internal_version: 0.6.3
2026-04-07 01:18:42.982734 | orchestrator | 2026-04-07 01:18:41 | INFO  | Setting property image_original_user: cirros
2026-04-07 01:18:42.982741 | orchestrator | 2026-04-07 01:18:41 | INFO  | Setting property os_version: 0.6.3
2026-04-07 01:18:42.982748 | orchestrator | 2026-04-07 01:18:41 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-07 01:18:42.982755 | orchestrator | 2026-04-07 01:18:41 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-07 01:18:42.982762 | orchestrator | 2026-04-07 01:18:42 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-07 01:18:42.982769 | orchestrator | 2026-04-07 01:18:42 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-07 01:18:42.982776 | orchestrator | 2026-04-07 01:18:42 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-07 01:18:43.285969 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-04-07 01:18:45.171164 | orchestrator | 2026-04-07 01:18:45 | INFO  | date: 2026-04-06
2026-04-07 01:18:45.171235 | orchestrator | 2026-04-07 01:18:45 | INFO  | image: octavia-amphora-haproxy-2024.2.20260406.qcow2
2026-04-07 01:18:45.171507 | orchestrator | 2026-04-07 01:18:45 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260406.qcow2
2026-04-07 01:18:45.171599 | orchestrator | 2026-04-07 01:18:45 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260406.qcow2.CHECKSUM
2026-04-07 01:18:45.293067 | orchestrator | 2026-04-07 01:18:45 | INFO  | checksum: 3f9899e9aa23b19857b0120b3f03cecbbd707cd89f3778f002b8e98238de2633
2026-04-07 01:18:45.389287 | orchestrator | 2026-04-07 01:18:45 | INFO  | It takes a moment until task 7dc9ea88-0481-403a-9516-f13464020253 (image-manager) has been started and output is visible here.
2026-04-07 01:21:33.417897 | orchestrator | 2026-04-07 01:18:47 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-06'
2026-04-07 01:21:33.418125 | orchestrator | 2026-04-07 01:18:47 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260406.qcow2: 200
2026-04-07 01:21:33.418160 | orchestrator | 2026-04-07 01:18:47 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-06
2026-04-07 01:21:33.418182 | orchestrator | 2026-04-07 01:18:47 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260406.qcow2
2026-04-07 01:21:33.418204 | orchestrator | 2026-04-07 01:18:49 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418223 | orchestrator | 2026-04-07 01:18:51 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418241 | orchestrator | 2026-04-07 01:19:01 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418260 | orchestrator | 2026-04-07 01:19:11 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418278 | orchestrator | 2026-04-07 01:19:21 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418302 | orchestrator | 2026-04-07 01:19:31 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418320 | orchestrator | 2026-04-07 01:19:41 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418340 | orchestrator | 2026-04-07 01:19:52 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418433 | orchestrator | 2026-04-07 01:20:02 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418457 | orchestrator | 2026-04-07 01:20:04 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418483 | orchestrator | 2026-04-07 01:20:06 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418505 | orchestrator | 2026-04-07 01:20:08 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418524 | orchestrator | 2026-04-07 01:20:10 | ERROR  | Image OpenStack Octavia Amphora 2026-04-06 seems stuck in queued state
2026-04-07 01:21:33.418545 | orchestrator | 2026-04-07 01:20:10 | WARNING  | Deleting stuck image OpenStack Octavia Amphora 2026-04-06 and retrying import
2026-04-07 01:21:33.418563 | orchestrator | 2026-04-07 01:20:10 | INFO  | Retry attempt 1/1 for image OpenStack Octavia Amphora 2026-04-06
2026-04-07 01:21:33.418581 | orchestrator | 2026-04-07 01:20:12 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418599 | orchestrator | 2026-04-07 01:20:14 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418618 | orchestrator | 2026-04-07 01:20:24 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418637 | orchestrator | 2026-04-07 01:20:34 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418655 | orchestrator | 2026-04-07 01:20:44 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418675 | orchestrator | 2026-04-07 01:20:54 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418692 | orchestrator | 2026-04-07 01:21:04 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418710 | orchestrator | 2026-04-07 01:21:14 | INFO  | Waiting for import to complete...
2026-04-07 01:21:33.418727 | orchestrator | 2026-04-07 01:21:24 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418769 | orchestrator | 2026-04-07 01:21:26 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418790 | orchestrator | 2026-04-07 01:21:28 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418809 | orchestrator | 2026-04-07 01:21:30 | INFO  | Waiting for image to leave queued state...
2026-04-07 01:21:33.418828 | orchestrator | 2026-04-07 01:21:33 | ERROR  | Image OpenStack Octavia Amphora 2026-04-06 seems stuck in queued state
2026-04-07 01:21:33.418846 | orchestrator | 2026-04-07 01:21:33 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-07 01:21:33.418866 | orchestrator | 2026-04-07 01:21:33 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-07 01:21:33.418880 | orchestrator | 2026-04-07 01:21:33 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-07 01:21:33.418891 | orchestrator | 2026-04-07 01:21:33 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-07 01:21:33.418902 | orchestrator |
2026-04-07 01:21:33.418914 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
2026-04-07 01:21:34.112008 | orchestrator | ERROR
2026-04-07 01:21:34.112416 | orchestrator | {
2026-04-07 01:21:34.112522 | orchestrator |   "delta": "0:04:47.906028",
2026-04-07 01:21:34.112627 | orchestrator |   "end": "2026-04-07 01:21:33.678451",
2026-04-07 01:21:34.112689 | orchestrator |   "msg": "non-zero return code",
2026-04-07 01:21:34.112744 | orchestrator |   "rc": 1,
2026-04-07 01:21:34.112800 | orchestrator |   "start": "2026-04-07 01:16:45.772423"
2026-04-07 01:21:34.112854 | orchestrator | } failure
2026-04-07 01:21:34.127318 |
2026-04-07 01:21:34.127464 | PLAY RECAP
2026-04-07 01:21:34.127547 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-04-07 01:21:34.127615 |
2026-04-07 01:21:34.397278 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-07 01:21:34.398740 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-07 01:21:35.144967 |
2026-04-07 01:21:35.145209 | PLAY [Post output play]
2026-04-07 01:21:35.161964 |
2026-04-07 01:21:35.162125 | LOOP [stage-output : Register sources]
2026-04-07 01:21:35.232955 |
2026-04-07 01:21:35.233303 | TASK [stage-output : Check sudo]
2026-04-07 01:21:36.132021 | orchestrator | sudo: a password is required
2026-04-07 01:21:36.274178 | orchestrator | ok: Runtime: 0:00:00.010393
2026-04-07 01:21:36.288618 |
2026-04-07 01:21:36.288783 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-07 01:21:36.326627 |
2026-04-07 01:21:36.326922 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-07 01:21:36.394619 | orchestrator | ok
2026-04-07 01:21:36.403435 |
2026-04-07 01:21:36.403571 | LOOP [stage-output : Ensure target folders exist]
2026-04-07 01:21:36.939795 | orchestrator | ok: "docs"
2026-04-07 01:21:36.940130 |
2026-04-07 01:21:37.196931 | orchestrator | ok: "artifacts"
2026-04-07 01:21:37.475966 | orchestrator | ok: "logs"
2026-04-07 01:21:37.494533 |
2026-04-07 01:21:37.494726 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-07 01:21:37.529374 |
2026-04-07 01:21:37.529646 | TASK [stage-output : Make all log files readable]
2026-04-07 01:21:37.848914 | orchestrator | ok
2026-04-07 01:21:37.857680 |
2026-04-07 01:21:37.857813 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-07 01:21:37.892297 | orchestrator | skipping: Conditional result was False
2026-04-07 01:21:37.910885 |
2026-04-07 01:21:37.911085 | TASK [stage-output : Discover log files for compression]
2026-04-07 01:21:37.936176 | orchestrator | skipping: Conditional result was False
2026-04-07 01:21:37.951675 |
2026-04-07 01:21:37.951837 | LOOP [stage-output : Archive everything from logs]
2026-04-07 01:21:37.997952 |
2026-04-07 01:21:37.998122 | PLAY [Post cleanup play]
2026-04-07 01:21:38.008106 |
2026-04-07 01:21:38.008224 | TASK [Set cloud fact (Zuul deployment)]
2026-04-07 01:21:38.066550 | orchestrator | ok
2026-04-07 01:21:38.078259 |
2026-04-07 01:21:38.078374 | TASK [Set cloud fact (local deployment)]
2026-04-07 01:21:38.112293 | orchestrator | skipping: Conditional result was False
2026-04-07 01:21:38.127454 |
2026-04-07 01:21:38.127607 | TASK [Clean the cloud environment]
2026-04-07 01:21:38.765821 | orchestrator | 2026-04-07 01:21:38 - clean up servers
2026-04-07 01:21:39.524198 | orchestrator | 2026-04-07 01:21:39 - testbed-manager
2026-04-07 01:21:39.608471 | orchestrator | 2026-04-07 01:21:39 - testbed-node-3
2026-04-07 01:21:39.700414 | orchestrator | 2026-04-07 01:21:39 - testbed-node-0
2026-04-07 01:21:39.787262 | orchestrator | 2026-04-07 01:21:39 - testbed-node-2
2026-04-07 01:21:39.875512 | orchestrator | 2026-04-07 01:21:39 - testbed-node-5
2026-04-07 01:21:39.965519 | orchestrator | 2026-04-07 01:21:39 - testbed-node-1
2026-04-07 01:21:40.055709 | orchestrator | 2026-04-07 01:21:40 - testbed-node-4
2026-04-07 01:21:40.166657 | orchestrator | 2026-04-07 01:21:40 - clean up keypairs
2026-04-07 01:21:40.187562 | orchestrator | 2026-04-07 01:21:40 - testbed
2026-04-07 01:21:40.210695 | orchestrator | 2026-04-07 01:21:40 - wait for servers to be gone
2026-04-07 01:21:51.050304 | orchestrator | 2026-04-07 01:21:51 - clean up ports
2026-04-07 01:21:51.243839 | orchestrator | 2026-04-07 01:21:51 - 18b75ecb-33b6-45af-838e-ee749185598c
2026-04-07 01:21:51.493873 | orchestrator | 2026-04-07 01:21:51 - 1980d88b-207e-455b-8233-b54a4b425c02
2026-04-07 01:21:51.767252 | orchestrator | 2026-04-07 01:21:51 - 796dde89-e43e-420a-ac93-31afabfe1b59
2026-04-07 01:21:52.020681 | orchestrator | 2026-04-07 01:21:52 - a1fa3ccf-0fb8-4c83-b495-acb11da6a9bf
2026-04-07 01:21:52.413117 | orchestrator | 2026-04-07 01:21:52 - a392e08e-f648-414a-b5c1-fc9973af925f
2026-04-07 01:21:52.627814 | orchestrator | 2026-04-07 01:21:52 - aac85223-cab2-4fcf-8d5a-45eefc46a8f7
2026-04-07 01:21:52.862681 | orchestrator | 2026-04-07 01:21:52 - c14b769c-4685-41c0-bb07-09cfe1c60b9f
2026-04-07 01:21:53.080062 | orchestrator | 2026-04-07 01:21:53 - clean up volumes
2026-04-07 01:21:53.198318 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-3-node-base
2026-04-07 01:21:53.237210 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-5-node-base
2026-04-07 01:21:53.279641 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-4-node-base
2026-04-07 01:21:53.321604 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-manager-base
2026-04-07 01:21:53.364035 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-0-node-base
2026-04-07 01:21:53.405421 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-2-node-base
2026-04-07 01:21:53.449335 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-1-node-base
2026-04-07 01:21:53.499212 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-7-node-4
2026-04-07 01:21:53.548087 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-4-node-4
2026-04-07 01:21:53.587752 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-6-node-3
2026-04-07 01:21:53.631168 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-1-node-4
2026-04-07 01:21:53.670667 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-8-node-5
2026-04-07 01:21:53.712732 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-0-node-3
2026-04-07 01:21:53.755912 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-5-node-5
2026-04-07 01:21:53.797693 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-3-node-3
2026-04-07 01:21:53.836868 | orchestrator | 2026-04-07 01:21:53 - testbed-volume-2-node-5
2026-04-07 01:21:53.879048 | orchestrator | 2026-04-07 01:21:53 - disconnect routers
2026-04-07 01:21:54.032603 | orchestrator | 2026-04-07 01:21:54 - testbed
2026-04-07 01:21:54.943931 | orchestrator | 2026-04-07 01:21:54 - clean up subnets
2026-04-07 01:21:55.022601 | orchestrator | 2026-04-07 01:21:55 - subnet-testbed-management
2026-04-07 01:21:55.167717 | orchestrator | 2026-04-07 01:21:55 - clean up networks
2026-04-07 01:21:55.349665 | orchestrator | 2026-04-07 01:21:55 - net-testbed-management
2026-04-07 01:21:55.654470 | orchestrator | 2026-04-07 01:21:55 - clean up security groups
2026-04-07 01:21:55.697641 | orchestrator | 2026-04-07 01:21:55 - testbed-management
2026-04-07 01:21:55.821717 | orchestrator | 2026-04-07 01:21:55 - testbed-node
2026-04-07 01:21:55.940789 | orchestrator | 2026-04-07 01:21:55 - clean up floating ips
2026-04-07 01:21:55.975938 | orchestrator | 2026-04-07 01:21:55 - 81.163.193.15
2026-04-07 01:21:56.323316 | orchestrator | 2026-04-07 01:21:56 - clean up routers
2026-04-07 01:21:56.433808 | orchestrator | 2026-04-07 01:21:56 - testbed
2026-04-07 01:21:58.191094 | orchestrator | ok: Runtime: 0:00:19.417029
2026-04-07 01:21:58.195443 |
2026-04-07 01:21:58.195650 | PLAY RECAP
2026-04-07 01:21:58.195785 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-07 01:21:58.195873 |
2026-04-07 01:21:58.332370 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-07 01:21:58.333437 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-07 01:21:59.124411 |
2026-04-07 01:21:59.124593 | PLAY [Cleanup play]
2026-04-07 01:21:59.140906 |
2026-04-07 01:21:59.141041 | TASK [Set cloud fact (Zuul deployment)]
2026-04-07 01:21:59.202424 | orchestrator | ok
2026-04-07 01:21:59.212069 |
2026-04-07 01:21:59.212251 | TASK [Set cloud fact (local deployment)]
2026-04-07 01:21:59.247130 | orchestrator | skipping: Conditional result was False
2026-04-07 01:21:59.272154 |
2026-04-07 01:21:59.272315 | TASK [Clean the cloud environment]
2026-04-07 01:22:00.498968 | orchestrator | 2026-04-07 01:22:00 - clean up servers
2026-04-07 01:22:00.988535 | orchestrator | 2026-04-07 01:22:00 - clean up keypairs
2026-04-07 01:22:01.012203 | orchestrator | 2026-04-07 01:22:01 - wait for servers to be gone
2026-04-07 01:22:01.064889 | orchestrator | 2026-04-07 01:22:01 - clean up ports
2026-04-07 01:22:01.144259 | orchestrator | 2026-04-07 01:22:01 - clean up volumes
2026-04-07 01:22:01.220688 | orchestrator | 2026-04-07 01:22:01 - disconnect routers
2026-04-07 01:22:01.253138 | orchestrator | 2026-04-07 01:22:01 - clean up subnets
2026-04-07 01:22:01.278672 | orchestrator | 2026-04-07 01:22:01 - clean up networks
2026-04-07 01:22:01.432968 | orchestrator | 2026-04-07 01:22:01 - clean up security groups
2026-04-07 01:22:01.477993 | orchestrator | 2026-04-07 01:22:01 - clean up floating ips
2026-04-07 01:22:01.504710 | orchestrator | 2026-04-07 01:22:01 - clean up routers
2026-04-07 01:22:01.811737 | orchestrator | ok: Runtime: 0:00:01.455930
2026-04-07 01:22:01.815957 |
2026-04-07 01:22:01.816118 | PLAY RECAP
2026-04-07 01:22:01.816240 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-07 01:22:01.816302 |
2026-04-07 01:22:01.949969 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-07 01:22:01.955082 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-07 01:22:02.729401 |
2026-04-07 01:22:02.729586 | PLAY [Base post-fetch]
2026-04-07 01:22:02.759260 |
2026-04-07 01:22:02.759424 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-07 01:22:02.815400 | orchestrator | skipping: Conditional result was False
2026-04-07 01:22:02.830525 |
2026-04-07 01:22:02.830733 | TASK [fetch-output : Set log path for single node]
2026-04-07 01:22:02.890128 | orchestrator | ok
2026-04-07 01:22:02.899650 |
2026-04-07 01:22:02.899793 | LOOP [fetch-output : Ensure local output dirs]
2026-04-07 01:22:03.387485 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/f25c0f5a182f4bef9ec88cb98187e293/work/logs"
2026-04-07 01:22:03.684112 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f25c0f5a182f4bef9ec88cb98187e293/work/artifacts"
2026-04-07 01:22:03.958534 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f25c0f5a182f4bef9ec88cb98187e293/work/docs"
2026-04-07 01:22:03.987877 |
2026-04-07 01:22:03.988062 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-07 01:22:04.936898 | orchestrator | changed: .d..t...... ./
2026-04-07 01:22:04.937152 | orchestrator | changed: All items complete
2026-04-07 01:22:04.937238 |
2026-04-07 01:22:05.668446 | orchestrator | changed: .d..t...... ./
2026-04-07 01:22:06.405538 | orchestrator | changed: .d..t...... ./
2026-04-07 01:22:06.426503 |
2026-04-07 01:22:06.426695 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-07 01:22:06.464095 | orchestrator | skipping: Conditional result was False
2026-04-07 01:22:06.470310 | orchestrator | skipping: Conditional result was False
2026-04-07 01:22:06.479412 |
2026-04-07 01:22:06.479503 | PLAY RECAP
2026-04-07 01:22:06.479557 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-07 01:22:06.479619 |
2026-04-07 01:22:06.612159 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-07 01:22:06.613619 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-07 01:22:07.349903 |
2026-04-07 01:22:07.350059 | PLAY [Base post]
2026-04-07 01:22:07.364239 |
2026-04-07 01:22:07.364379 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-07 01:22:08.384053 | orchestrator | changed
2026-04-07 01:22:08.394202 |
2026-04-07 01:22:08.394335 | PLAY RECAP
2026-04-07 01:22:08.394412 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-07 01:22:08.394491 |
2026-04-07 01:22:08.513039 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-07 01:22:08.515686 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-07 01:22:09.309022 |
2026-04-07 01:22:09.309191 | PLAY [Base post-logs]
2026-04-07 01:22:09.319949 |
2026-04-07 01:22:09.320090 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-07 01:22:09.772290 | localhost | changed
2026-04-07 01:22:09.782339 |
2026-04-07 01:22:09.782486 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-07 01:22:09.820912 | localhost | ok
2026-04-07 01:22:09.828100 |
2026-04-07 01:22:09.828272 | TASK [Set zuul-log-path fact]
2026-04-07 01:22:09.846001 | localhost | ok
2026-04-07 01:22:09.858868 |
2026-04-07 01:22:09.858995 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-07 01:22:09.885392 | localhost | ok
2026-04-07 01:22:09.890356 |
2026-04-07 01:22:09.890503 | TASK [upload-logs : Create log directories]
2026-04-07 01:22:10.422899 | localhost | changed
2026-04-07 01:22:10.428155 |
2026-04-07 01:22:10.428355 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-07 01:22:10.973378 | localhost -> localhost | ok: Runtime: 0:00:00.007641
2026-04-07 01:22:10.977823 |
2026-04-07 01:22:10.977943 | TASK [upload-logs : Upload logs to log server]
2026-04-07 01:22:11.536036 | localhost | Output suppressed because no_log was given
2026-04-07 01:22:11.540356 |
2026-04-07 01:22:11.540546 | LOOP [upload-logs : Compress console log and json output]
2026-04-07 01:22:11.597273 | localhost | skipping: Conditional result was False
2026-04-07 01:22:11.613779 | localhost | skipping: Conditional result was False
2026-04-07 01:22:11.620849 |
2026-04-07 01:22:11.620981 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-07 01:22:11.675974 | localhost | skipping: Conditional result was False
2026-04-07 01:22:11.676360 |
2026-04-07 01:22:11.680907 | localhost | skipping: Conditional result was False
2026-04-07 01:22:11.692557 |
2026-04-07 01:22:11.692815 | LOOP [upload-logs : Upload console log and json output]