2026-01-07 00:00:07.378470 | Job console starting
2026-01-07 00:00:07.404945 | Updating git repos
2026-01-07 00:00:07.652658 | Cloning repos into workspace
2026-01-07 00:00:08.088423 | Restoring repo states
2026-01-07 00:00:08.130145 | Merging changes
2026-01-07 00:00:08.130198 | Checking out repos
2026-01-07 00:00:08.766166 | Preparing playbooks
2026-01-07 00:00:09.856367 | Running Ansible setup
2026-01-07 00:00:18.075360 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-07 00:00:20.546094 |
2026-01-07 00:00:20.546312 | PLAY [Base pre]
2026-01-07 00:00:20.650539 |
2026-01-07 00:00:20.651011 | TASK [Setup log path fact]
2026-01-07 00:00:20.715940 | orchestrator | ok
2026-01-07 00:00:20.787504 |
2026-01-07 00:00:20.787756 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-07 00:00:20.871361 | orchestrator | ok
2026-01-07 00:00:20.903257 |
2026-01-07 00:00:20.903410 | TASK [emit-job-header : Print job information]
2026-01-07 00:00:21.047202 | # Job Information
2026-01-07 00:00:21.047521 | Ansible Version: 2.16.14
2026-01-07 00:00:21.047564 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-07 00:00:21.047615 | Pipeline: periodic-midnight
2026-01-07 00:00:21.047644 | Executor: 521e9411259a
2026-01-07 00:00:21.047666 | Triggered by: https://github.com/osism/testbed
2026-01-07 00:00:21.047688 | Event ID: 461ce70bf2dc497f9380b0f2b29a549d
2026-01-07 00:00:21.065760 |
2026-01-07 00:00:21.065911 | LOOP [emit-job-header : Print node information]
2026-01-07 00:00:21.364724 | orchestrator | ok:
2026-01-07 00:00:21.365027 | orchestrator | # Node Information
2026-01-07 00:00:21.365064 | orchestrator | Inventory Hostname: orchestrator
2026-01-07 00:00:21.365089 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-07 00:00:21.365112 | orchestrator | Username: zuul-testbed05
2026-01-07 00:00:21.365133 | orchestrator | Distro: Debian 12.12
2026-01-07 00:00:21.365158 | orchestrator | Provider: static-testbed
2026-01-07 00:00:21.365209 | orchestrator | Region:
2026-01-07 00:00:21.365243 | orchestrator | Label: testbed-orchestrator
2026-01-07 00:00:21.365265 | orchestrator | Product Name: OpenStack Nova
2026-01-07 00:00:21.365284 | orchestrator | Interface IP: 81.163.193.140
2026-01-07 00:00:21.415713 |
2026-01-07 00:00:21.415870 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-07 00:00:22.805450 | orchestrator -> localhost | changed
2026-01-07 00:00:22.826307 |
2026-01-07 00:00:22.826455 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-07 00:00:26.636935 | orchestrator -> localhost | changed
2026-01-07 00:00:26.678068 |
2026-01-07 00:00:26.678720 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-07 00:00:28.136034 | orchestrator -> localhost | ok
2026-01-07 00:00:28.146128 |
2026-01-07 00:00:28.146305 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-07 00:00:28.253559 | orchestrator | ok
2026-01-07 00:00:28.320611 | orchestrator | included: /var/lib/zuul/builds/c61f2e5dbb054357b308a9fc4c27d52b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-07 00:00:28.368049 |
2026-01-07 00:00:28.368244 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-07 00:00:32.096774 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-07 00:00:32.097118 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c61f2e5dbb054357b308a9fc4c27d52b/work/c61f2e5dbb054357b308a9fc4c27d52b_id_rsa
2026-01-07 00:00:32.097180 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c61f2e5dbb054357b308a9fc4c27d52b/work/c61f2e5dbb054357b308a9fc4c27d52b_id_rsa.pub
2026-01-07 00:00:32.097208 | orchestrator -> localhost | The key fingerprint is:
2026-01-07 00:00:32.097236 | orchestrator -> localhost | SHA256:uTN5+f0RRfhvJKcL8f5clnNoIHJxRxsv55EbZzZviZg zuul-build-sshkey
2026-01-07 00:00:32.097258 | orchestrator -> localhost | The key's randomart image is:
2026-01-07 00:00:32.097293 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-07 00:00:32.097316 | orchestrator -> localhost | | o..|
2026-01-07 00:00:32.097338 | orchestrator -> localhost | | ..=.|
2026-01-07 00:00:32.097358 | orchestrator -> localhost | | . . +*B|
2026-01-07 00:00:32.097378 | orchestrator -> localhost | | . o+.o*&|
2026-01-07 00:00:32.097398 | orchestrator -> localhost | | S oE.+ O=|
2026-01-07 00:00:32.097422 | orchestrator -> localhost | | = o..oo=|
2026-01-07 00:00:32.097443 | orchestrator -> localhost | | = o oo*+|
2026-01-07 00:00:32.097463 | orchestrator -> localhost | | + . ooo=|
2026-01-07 00:00:32.097484 | orchestrator -> localhost | | . .o+|
2026-01-07 00:00:32.097504 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-07 00:00:32.097579 | orchestrator -> localhost | ok: Runtime: 0:00:01.615158
2026-01-07 00:00:32.110056 |
2026-01-07 00:00:32.110255 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-07 00:00:32.175248 | orchestrator | ok
2026-01-07 00:00:32.218453 | orchestrator | included: /var/lib/zuul/builds/c61f2e5dbb054357b308a9fc4c27d52b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-07 00:00:32.281949 |
2026-01-07 00:00:32.282105 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-07 00:00:32.361430 | orchestrator | skipping: Conditional result was False
2026-01-07 00:00:32.376689 |
2026-01-07 00:00:32.376913 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-07 00:00:33.786498 | orchestrator | changed
2026-01-07 00:00:33.809815 |
2026-01-07 00:00:33.817323 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-07 00:00:34.144644 | orchestrator | ok
2026-01-07 00:00:34.157970 |
2026-01-07 00:00:34.158121 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-07 00:00:34.814284 | orchestrator | ok
2026-01-07 00:00:34.830069 |
2026-01-07 00:00:34.830280 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-07 00:00:35.409324 | orchestrator | ok
2026-01-07 00:00:35.424335 |
2026-01-07 00:00:35.424494 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-07 00:00:35.459197 | orchestrator | skipping: Conditional result was False
2026-01-07 00:00:35.468388 |
2026-01-07 00:00:35.468539 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-07 00:00:38.164271 | orchestrator -> localhost | changed
2026-01-07 00:00:38.233699 |
2026-01-07 00:00:38.233879 | TASK [add-build-sshkey : Add back temp key]
2026-01-07 00:00:39.355042 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c61f2e5dbb054357b308a9fc4c27d52b/work/c61f2e5dbb054357b308a9fc4c27d52b_id_rsa (zuul-build-sshkey)
2026-01-07 00:00:39.355398 | orchestrator -> localhost | ok: Runtime: 0:00:00.069650
2026-01-07 00:00:39.366785 |
2026-01-07 00:00:39.366964 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-07 00:00:40.210393 | orchestrator | ok
2026-01-07 00:00:40.235756 |
2026-01-07 00:00:40.237142 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-07 00:00:40.319868 | orchestrator | skipping: Conditional result was False
2026-01-07 00:00:40.560599 |
2026-01-07 00:00:40.560752 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-07 00:00:41.272274 | orchestrator | ok
2026-01-07 00:00:41.335300 |
2026-01-07 00:00:41.336550 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-07 00:00:41.439443 | orchestrator | ok
2026-01-07 00:00:41.468784 |
2026-01-07 00:00:41.468962 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-07 00:00:42.637879 | orchestrator -> localhost | ok
2026-01-07 00:00:42.646090 |
2026-01-07 00:00:42.646345 | TASK [validate-host : Collect information about the host]
2026-01-07 00:00:44.981022 | orchestrator | ok
2026-01-07 00:00:45.082551 |
2026-01-07 00:00:45.082722 | TASK [validate-host : Sanitize hostname]
2026-01-07 00:00:45.337609 | orchestrator | ok
2026-01-07 00:00:45.359238 |
2026-01-07 00:00:45.359547 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-07 00:00:47.811328 | orchestrator -> localhost | changed
2026-01-07 00:00:47.818346 |
2026-01-07 00:00:47.818483 | TASK [validate-host : Collect information about zuul worker]
2026-01-07 00:00:48.655457 | orchestrator | ok
2026-01-07 00:00:48.666359 |
2026-01-07 00:00:48.666583 | TASK [validate-host : Write out all zuul information for each host]
2026-01-07 00:00:50.730546 | orchestrator -> localhost | changed
2026-01-07 00:00:50.752332 |
2026-01-07 00:00:50.752475 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-07 00:00:51.049899 | orchestrator | ok
2026-01-07 00:00:51.065639 |
2026-01-07 00:00:51.065802 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-07 00:02:13.165621 | orchestrator | changed:
2026-01-07 00:02:13.165944 | orchestrator | .d..t...... src/
2026-01-07 00:02:13.165986 | orchestrator | .d..t...... src/github.com/
2026-01-07 00:02:13.166012 | orchestrator | .d..t...... src/github.com/osism/
2026-01-07 00:02:13.166034 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-07 00:02:13.166055 | orchestrator | RedHat.yml
2026-01-07 00:02:13.242747 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-07 00:02:13.242765 | orchestrator | RedHat.yml
2026-01-07 00:02:13.242817 | orchestrator | = 1.53.0"...
2026-01-07 00:02:31.917724 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-07 00:02:32.076831 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-07 00:02:34.970655 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-07 00:02:35.039422 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-07 00:02:36.246196 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-07 00:02:36.318086 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-07 00:02:36.829762 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-07 00:02:36.829853 | orchestrator |
2026-01-07 00:02:36.829860 | orchestrator | Providers are signed by their developers.
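The `add-build-sshkey` tasks above follow a common CI pattern: generate a throwaway keypair named after the build UUID, authorize its public half on every node, then drop the long-lived master key from the agent so the rest of the job runs only with build-scoped credentials. A minimal sketch of that lifecycle, with paths and the workspace layout as illustrative assumptions rather than the role's exact implementation:

```shell
#!/bin/sh
# Sketch of a per-build SSH key lifecycle (assumed layout, not the
# actual add-build-sshkey role code). Requires ssh-keygen.
set -eu

command -v ssh-keygen >/dev/null || { echo "ssh-keygen unavailable"; exit 1; }

WORK=$(mktemp -d)
BUILD_UUID=c61f2e5dbb054357b308a9fc4c27d52b   # taken from the log above
KEY="$WORK/${BUILD_UUID}_id_rsa"

# 1. Create a temporary RSA keypair named after the build UUID.
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "$KEY" >/dev/null

# 2. Append the public key to the node's authorized_keys so the executor
#    can still reach the node after the master key is removed.
AUTH="$WORK/authorized_keys"
cat "$KEY.pub" >> "$AUTH"
chmod 600 "$AUTH"

# 3. The real role then removes the master key from the agent, loads the
#    build key (the "Add back temp key" task), and re-verifies SSH access.
grep -c zuul-build-sshkey "$AUTH"
```

The key comment `zuul-build-sshkey` is what the "Remove previously added zuul-build-sshkey" task matches on when cleaning up keys left by earlier builds.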
2026-01-07 00:02:36.829865 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-07 00:02:36.829914 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-07 00:02:36.829953 | orchestrator |
2026-01-07 00:02:36.829959 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-07 00:02:36.829964 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-07 00:02:36.829982 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-07 00:02:36.829993 | orchestrator | you run "tofu init" in the future.
2026-01-07 00:02:36.830468 | orchestrator |
2026-01-07 00:02:36.830511 | orchestrator | OpenTofu has been successfully initialized!
2026-01-07 00:02:36.830539 | orchestrator |
2026-01-07 00:02:36.830544 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-07 00:02:36.830549 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-07 00:02:36.830553 | orchestrator | should now work.
2026-01-07 00:02:36.830557 | orchestrator |
2026-01-07 00:02:36.830561 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-07 00:02:36.830566 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-07 00:02:36.830577 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-07 00:02:37.022704 | orchestrator | Created and switched to workspace "ci"!
2026-01-07 00:02:37.022773 | orchestrator |
2026-01-07 00:02:37.022780 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-07 00:02:37.022786 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-07 00:02:37.022812 | orchestrator | for this configuration.
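The output above corresponds to the standard OpenTofu bootstrap sequence: `tofu init` installs and pins the providers, a dedicated `ci` workspace isolates this run's state, and `tofu plan` previews the resources. A hedged sketch of that sequence against a trivial throwaway configuration (the real job runs it against the testbed's Terraform directory, which is not shown in this log):

```shell
#!/bin/sh
# Sketch of the OpenTofu init -> workspace -> plan flow seen in the log.
# Uses a minimal throwaway config; the job's actual config differs.
set -eu

command -v tofu >/dev/null || { echo "tofu not installed; skipping"; exit 0; }

DIR=$(mktemp -d)
cd "$DIR"
cat > main.tf <<'EOF'
terraform {
  required_version = ">= 1.0"
}
EOF

tofu init -input=false            # installs providers, writes .terraform.lock.hcl
tofu workspace new ci             # isolates state for this CI run
tofu plan -input=false            # preview only; apply happens later in the job
```

Keeping `.terraform.lock.hcl` under version control, as the log's own hint says, is what makes the provider versions (`hashicorp/null v3.2.4`, `openstack v3.4.0`, `hashicorp/local v2.6.1`) reproducible across runs.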
2026-01-07 00:02:42.232726 | orchestrator | ci.auto.tfvars
2026-01-07 00:02:42.240630 | orchestrator | default_custom.tf
2026-01-07 00:02:44.762084 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-07 00:02:45.341503 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-07 00:02:45.668188 | orchestrator |
2026-01-07 00:02:46.197061 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-07 00:02:46.197149 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-07 00:02:46.197166 | orchestrator | + create
2026-01-07 00:02:46.197217 | orchestrator | <= read (data resources)
2026-01-07 00:02:46.197231 | orchestrator |
2026-01-07 00:02:46.197243 | orchestrator | OpenTofu will perform the following actions:
2026-01-07 00:02:46.197255 | orchestrator |
2026-01-07 00:02:46.197266 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-07 00:02:46.197278 | orchestrator | # (config refers to values not yet known)
2026-01-07 00:02:46.197289 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-07 00:02:46.197301 | orchestrator | + checksum = (known after apply)
2026-01-07 00:02:46.197312 | orchestrator | + created_at = (known after apply)
2026-01-07 00:02:46.197323 | orchestrator | + file = (known after apply)
2026-01-07 00:02:46.197334 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.197375 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.197387 | orchestrator | + min_disk_gb = (known after apply)
2026-01-07 00:02:46.197398 | orchestrator | + min_ram_mb = (known after apply)
2026-01-07 00:02:46.197409 | orchestrator | + most_recent = true
2026-01-07 00:02:46.197421 | orchestrator | + name = (known after apply)
2026-01-07 00:02:46.197432 | orchestrator | + protected = (known after apply)
2026-01-07 00:02:46.197443 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.197458 | orchestrator | + schema = (known after apply)
2026-01-07 00:02:46.197470 | orchestrator | + size_bytes = (known after apply)
2026-01-07 00:02:46.197481 | orchestrator | + tags = (known after apply)
2026-01-07 00:02:46.197492 | orchestrator | + updated_at = (known after apply)
2026-01-07 00:02:46.197502 | orchestrator | }
2026-01-07 00:02:46.197514 | orchestrator |
2026-01-07 00:02:46.197525 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-07 00:02:46.197536 | orchestrator | # (config refers to values not yet known)
2026-01-07 00:02:46.197547 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-07 00:02:46.197558 | orchestrator | + checksum = (known after apply)
2026-01-07 00:02:46.197569 | orchestrator | + created_at = (known after apply)
2026-01-07 00:02:46.197580 | orchestrator | + file = (known after apply)
2026-01-07 00:02:46.197590 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.197601 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.197612 | orchestrator | + min_disk_gb = (known after apply)
2026-01-07 00:02:46.197622 | orchestrator | + min_ram_mb = (known after apply)
2026-01-07 00:02:46.197633 | orchestrator | + most_recent = true
2026-01-07 00:02:46.197644 | orchestrator | + name = (known after apply)
2026-01-07 00:02:46.197654 | orchestrator | + protected = (known after apply)
2026-01-07 00:02:46.197665 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.197676 | orchestrator | + schema = (known after apply)
2026-01-07 00:02:46.197687 | orchestrator | + size_bytes = (known after apply)
2026-01-07 00:02:46.197698 | orchestrator | + tags = (known after apply)
2026-01-07 00:02:46.197708 | orchestrator | + updated_at = (known after apply)
2026-01-07 00:02:46.197719 | orchestrator | }
2026-01-07 00:02:46.197730 | orchestrator |
2026-01-07 00:02:46.197740 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-07 00:02:46.197751 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-07 00:02:46.197762 | orchestrator | + content = (known after apply)
2026-01-07 00:02:46.197774 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:46.197785 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:46.197796 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:46.197806 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:46.197817 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:46.197828 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:46.197839 | orchestrator | + directory_permission = "0777"
2026-01-07 00:02:46.197849 | orchestrator | + file_permission = "0644"
2026-01-07 00:02:46.197860 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-07 00:02:46.197871 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.197882 | orchestrator | }
2026-01-07 00:02:46.197914 | orchestrator |
2026-01-07 00:02:46.197926 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-07 00:02:46.197937 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-07 00:02:46.197948 | orchestrator | + content = (known after apply)
2026-01-07 00:02:46.197958 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:46.197969 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:46.197980 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:46.197991 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:46.198001 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:46.198012 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:46.198064 | orchestrator | + directory_permission = "0777"
2026-01-07 00:02:46.198076 | orchestrator | + file_permission = "0644"
2026-01-07 00:02:46.198111 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-07 00:02:46.198122 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.198132 | orchestrator | }
2026-01-07 00:02:46.198143 | orchestrator |
2026-01-07 00:02:46.198163 | orchestrator | # local_file.inventory will be created
2026-01-07 00:02:46.198175 | orchestrator | + resource "local_file" "inventory" {
2026-01-07 00:02:46.198185 | orchestrator | + content = (known after apply)
2026-01-07 00:02:46.198196 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:46.198207 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:46.198217 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:46.198228 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:46.198239 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:46.198250 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:46.198261 | orchestrator | + directory_permission = "0777"
2026-01-07 00:02:46.198272 | orchestrator | + file_permission = "0644"
2026-01-07 00:02:46.198283 | orchestrator | + filename = "inventory.ci"
2026-01-07 00:02:46.198294 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.198304 | orchestrator | }
2026-01-07 00:02:46.198315 | orchestrator |
2026-01-07 00:02:46.198326 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-07 00:02:46.198337 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-07 00:02:46.198409 | orchestrator | + content = (sensitive value)
2026-01-07 00:02:46.198420 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:46.198431 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:46.198442 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:46.198453 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:46.198464 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:46.198475 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:46.198486 | orchestrator | + directory_permission = "0700"
2026-01-07 00:02:46.198496 | orchestrator | + file_permission = "0600"
2026-01-07 00:02:46.198507 | orchestrator | + filename = ".id_rsa.ci"
2026-01-07 00:02:46.198531 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.198544 | orchestrator | }
2026-01-07 00:02:46.198563 | orchestrator |
2026-01-07 00:02:46.198582 | orchestrator | # null_resource.node_semaphore will be created
2026-01-07 00:02:46.198601 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-07 00:02:46.198618 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.198637 | orchestrator | }
2026-01-07 00:02:46.198656 | orchestrator |
2026-01-07 00:02:46.198674 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-07 00:02:46.198692 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-07 00:02:46.198710 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.198728 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.198747 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.198766 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.198784 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.198802 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-07 00:02:46.198820 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.198836 | orchestrator | + size = 80
2026-01-07 00:02:46.198855 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.198873 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.198928 | orchestrator | }
2026-01-07 00:02:46.198951 | orchestrator |
2026-01-07 00:02:46.198968 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-07 00:02:46.198985 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.199004 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.199023 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.199040 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.199074 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.199093 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.199112 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-07 00:02:46.199129 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.199149 | orchestrator | + size = 80
2026-01-07 00:02:46.199170 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.199188 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.199207 | orchestrator | }
2026-01-07 00:02:46.199229 | orchestrator |
2026-01-07 00:02:46.199247 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-07 00:02:46.199266 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.199284 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.199302 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.199321 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.199339 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.199359 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.199380 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-07 00:02:46.199400 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.199419 | orchestrator | + size = 80
2026-01-07 00:02:46.199438 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.199457 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.199475 | orchestrator | }
2026-01-07 00:02:46.199494 | orchestrator |
2026-01-07 00:02:46.199513 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-07 00:02:46.199532 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.199551 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.199570 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.199588 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.199607 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.199625 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.199642 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-07 00:02:46.199661 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.199681 | orchestrator | + size = 80
2026-01-07 00:02:46.199701 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.199719 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.199738 | orchestrator | }
2026-01-07 00:02:46.200025 | orchestrator |
2026-01-07 00:02:46.200045 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-07 00:02:46.200064 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.200083 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.200101 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.200119 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.200137 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.200156 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.200186 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-07 00:02:46.200206 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.200225 | orchestrator | + size = 80
2026-01-07 00:02:46.200328 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.200350 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.200368 | orchestrator | }
2026-01-07 00:02:46.200386 | orchestrator |
2026-01-07 00:02:46.200405 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-07 00:02:46.200424 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.200442 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.200460 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.200478 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.200510 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.200526 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.200542 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-07 00:02:46.200558 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.200574 | orchestrator | + size = 80
2026-01-07 00:02:46.200590 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.200607 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.200623 | orchestrator | }
2026-01-07 00:02:46.200639 | orchestrator |
2026-01-07 00:02:46.200655 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-07 00:02:46.200671 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.200687 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.200720 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.200737 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.200753 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.200769 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.200785 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-07 00:02:46.200800 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.200816 | orchestrator | + size = 80
2026-01-07 00:02:46.200833 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.200849 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.200865 | orchestrator | }
2026-01-07 00:02:46.200881 | orchestrator |
2026-01-07 00:02:46.200923 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-07 00:02:46.200943 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.200960 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.200976 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.200993 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.201009 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.201026 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-07 00:02:46.201042 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.201059 | orchestrator | + size = 20
2026-01-07 00:02:46.201076 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.201092 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.201108 | orchestrator | }
2026-01-07 00:02:46.201125 | orchestrator |
2026-01-07 00:02:46.201141 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-07 00:02:46.201157 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.201173 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.201189 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.201206 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.201223 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.201239 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-07 00:02:46.201255 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.201271 | orchestrator | + size = 20
2026-01-07 00:02:46.201287 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.201301 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.201316 | orchestrator | }
2026-01-07 00:02:46.201332 | orchestrator |
2026-01-07 00:02:46.201347 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-07 00:02:46.201361 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.201376 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.201391 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.201406 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.201422 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.201437 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-07 00:02:46.201452 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.201481 | orchestrator | + size = 20
2026-01-07 00:02:46.201498 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.201513 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.201529 | orchestrator | }
2026-01-07 00:02:46.201544 | orchestrator |
2026-01-07 00:02:46.201560 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-07 00:02:46.201576 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.201593 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.201609 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.201625 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.201641 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.201656 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-07 00:02:46.201673 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.201690 | orchestrator | + size = 20
2026-01-07 00:02:46.201707 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.201723 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.201739 | orchestrator | }
2026-01-07 00:02:46.201755 | orchestrator |
2026-01-07 00:02:46.201772 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-07 00:02:46.201788 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.201804 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.201821 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.201837 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.201853 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.201869 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-07 00:02:46.201885 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.201940 | orchestrator | + size = 20
2026-01-07 00:02:46.201957 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.201973 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.201989 | orchestrator | }
2026-01-07 00:02:46.202005 | orchestrator |
2026-01-07 00:02:46.202063 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-07 00:02:46.202080 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.202097 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.202113 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.202130 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.202147 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.202163 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-07 00:02:46.202180 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.202196 | orchestrator | + size = 20
2026-01-07 00:02:46.202212 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.202229 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.202245 | orchestrator | }
2026-01-07 00:02:46.202262 | orchestrator |
2026-01-07 00:02:46.202279 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-07 00:02:46.202296 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.202313 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.202329 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.202345 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.202360 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.202377 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-07 00:02:46.202411 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.202429 | orchestrator | + size = 20
2026-01-07 00:02:46.202445 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.202462 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.202479 | orchestrator | }
2026-01-07 00:02:46.202496 | orchestrator |
2026-01-07 00:02:46.202512 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-07 00:02:46.202529 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.202558 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.202573 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.202590 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.202605 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.202622 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-07 00:02:46.202639 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.202655 | orchestrator | + size = 20
2026-01-07 00:02:46.202672 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.202689 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.202801 | orchestrator | }
2026-01-07 00:02:46.202818 | orchestrator |
2026-01-07 00:02:46.202834 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-07 00:02:46.203015 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-07 00:02:46.203033 | orchestrator | + attachment = (known after apply) 2026-01-07 00:02:46.203049 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.203110 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.203128 | orchestrator | + metadata = (known after apply) 2026-01-07 00:02:46.203144 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-07 00:02:46.203202 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.203220 | orchestrator | + size = 20 2026-01-07 00:02:46.203237 | orchestrator | + volume_retype_policy = "never" 2026-01-07 00:02:46.203298 | orchestrator | + volume_type = "ssd" 2026-01-07 00:02:46.203316 | orchestrator | } 2026-01-07 00:02:46.203333 | orchestrator | 2026-01-07 00:02:46.203349 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-07 00:02:46.203411 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-07 00:02:46.203428 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.203444 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.203497 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.203510 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.203524 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.203537 | orchestrator | + config_drive = true 2026-01-07 00:02:46.203584 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.203598 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.203611 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-07 00:02:46.203624 | orchestrator | + force_delete = false 2026-01-07 00:02:46.203671 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.203687 | 
orchestrator | + id = (known after apply) 2026-01-07 00:02:46.203700 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.203713 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.203726 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.203774 | orchestrator | + name = "testbed-manager" 2026-01-07 00:02:46.203787 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.203800 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.203814 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.203862 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.203875 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.203888 | orchestrator | + user_data = (sensitive value) 2026-01-07 00:02:46.203957 | orchestrator | 2026-01-07 00:02:46.203971 | orchestrator | + block_device { 2026-01-07 00:02:46.203984 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.203997 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.204060 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.204076 | orchestrator | + multiattach = false 2026-01-07 00:02:46.204091 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.204145 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.204170 | orchestrator | } 2026-01-07 00:02:46.204183 | orchestrator | 2026-01-07 00:02:46.204232 | orchestrator | + network { 2026-01-07 00:02:46.204246 | orchestrator | + access_network = false 2026-01-07 00:02:46.204260 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.204273 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.204324 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.204338 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.204351 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.204364 | orchestrator | + uuid = (known after apply) 2026-01-07 
00:02:46.204413 | orchestrator | } 2026-01-07 00:02:46.204427 | orchestrator | } 2026-01-07 00:02:46.204440 | orchestrator | 2026-01-07 00:02:46.204453 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-07 00:02:46.204502 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.204516 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.204529 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.204542 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.204590 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.204603 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.204616 | orchestrator | + config_drive = true 2026-01-07 00:02:46.204630 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.204680 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.204693 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.204707 | orchestrator | + force_delete = false 2026-01-07 00:02:46.204720 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.204767 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.204782 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.204795 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.204807 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.204820 | orchestrator | + name = "testbed-node-0" 2026-01-07 00:02:46.204868 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.204881 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.204912 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.204960 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.204988 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.205002 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.205050 | orchestrator | 2026-01-07 00:02:46.205063 | orchestrator | + block_device { 2026-01-07 00:02:46.205076 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.205089 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.205135 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.205148 | orchestrator | + multiattach = false 2026-01-07 00:02:46.205161 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.205174 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.205187 | orchestrator | } 2026-01-07 00:02:46.205235 | orchestrator | 2026-01-07 00:02:46.205248 | orchestrator | + network { 2026-01-07 00:02:46.205261 | orchestrator | + access_network = false 2026-01-07 00:02:46.205274 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.205320 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.205334 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.205347 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.205360 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.205373 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.205421 | orchestrator | } 2026-01-07 00:02:46.205434 | orchestrator | } 2026-01-07 00:02:46.205447 | orchestrator | 2026-01-07 00:02:46.205460 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-07 00:02:46.205623 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.205643 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.205668 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.205716 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.205731 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.205744 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.205757 
| orchestrator | + config_drive = true 2026-01-07 00:02:46.205804 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.205820 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.205833 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.205846 | orchestrator | + force_delete = false 2026-01-07 00:02:46.205859 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.205994 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.206009 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.206270 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.206285 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.206296 | orchestrator | + name = "testbed-node-1" 2026-01-07 00:02:46.206308 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.206360 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.206374 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.206385 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.206397 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.206409 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.206510 | orchestrator | 2026-01-07 00:02:46.207445 | orchestrator | + block_device { 2026-01-07 00:02:46.207508 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.207522 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.207534 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.207545 | orchestrator | + multiattach = false 2026-01-07 00:02:46.207599 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.207612 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.207624 | orchestrator | } 2026-01-07 00:02:46.207636 | orchestrator | 2026-01-07 00:02:46.207689 | orchestrator | + network { 2026-01-07 00:02:46.207702 | orchestrator | + access_network = 
false 2026-01-07 00:02:46.207714 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.207727 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.207780 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.207792 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.207804 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.207815 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.207865 | orchestrator | } 2026-01-07 00:02:46.207879 | orchestrator | } 2026-01-07 00:02:46.207911 | orchestrator | 2026-01-07 00:02:46.207963 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-07 00:02:46.207976 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.207988 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.207999 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.208055 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.208068 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.208094 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.208145 | orchestrator | + config_drive = true 2026-01-07 00:02:46.208157 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.208169 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.208181 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.208192 | orchestrator | + force_delete = false 2026-01-07 00:02:46.208244 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.208256 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.208268 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.208334 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.208347 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.208358 | orchestrator | + name = 
"testbed-node-2" 2026-01-07 00:02:46.208370 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.208382 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.208434 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.208446 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.208458 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.208469 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.208510 | orchestrator | 2026-01-07 00:02:46.208522 | orchestrator | + block_device { 2026-01-07 00:02:46.208533 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.208545 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.208598 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.208610 | orchestrator | + multiattach = false 2026-01-07 00:02:46.208622 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.208633 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.208679 | orchestrator | } 2026-01-07 00:02:46.208691 | orchestrator | 2026-01-07 00:02:46.208703 | orchestrator | + network { 2026-01-07 00:02:46.208755 | orchestrator | + access_network = false 2026-01-07 00:02:46.208768 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.208780 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.208832 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.208844 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.208856 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.208867 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.208879 | orchestrator | } 2026-01-07 00:02:46.208938 | orchestrator | } 2026-01-07 00:02:46.208950 | orchestrator | 2026-01-07 00:02:46.208962 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-07 00:02:46.208974 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.209017 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.209030 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.209042 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.209053 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.209065 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.209108 | orchestrator | + config_drive = true 2026-01-07 00:02:46.209120 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.209131 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.209143 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.209154 | orchestrator | + force_delete = false 2026-01-07 00:02:46.209207 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.209219 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.209230 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.209242 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.209285 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.209297 | orchestrator | + name = "testbed-node-3" 2026-01-07 00:02:46.209309 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.209320 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.209331 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.209376 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.209388 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.209400 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.209411 | orchestrator | 2026-01-07 00:02:46.209455 | orchestrator | + block_device { 2026-01-07 00:02:46.209474 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.209486 | orchestrator | + delete_on_termination = false 2026-01-07 
00:02:46.209497 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.209552 | orchestrator | + multiattach = false 2026-01-07 00:02:46.209565 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.209577 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.209589 | orchestrator | } 2026-01-07 00:02:46.209633 | orchestrator | 2026-01-07 00:02:46.209646 | orchestrator | + network { 2026-01-07 00:02:46.209658 | orchestrator | + access_network = false 2026-01-07 00:02:46.209669 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.209681 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.209725 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.209737 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.209749 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.209760 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.209772 | orchestrator | } 2026-01-07 00:02:46.209816 | orchestrator | } 2026-01-07 00:02:46.209828 | orchestrator | 2026-01-07 00:02:46.209840 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-07 00:02:46.209853 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.209918 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.209931 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.209943 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.209954 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.209992 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.210003 | orchestrator | + config_drive = true 2026-01-07 00:02:46.210051 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.210067 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.210079 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.210089 | 
orchestrator | + force_delete = false 2026-01-07 00:02:46.210101 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.210142 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.210154 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.210165 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.210176 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.210187 | orchestrator | + name = "testbed-node-4" 2026-01-07 00:02:46.210199 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.210210 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.210221 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.210232 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.210243 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.210254 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.210266 | orchestrator | 2026-01-07 00:02:46.210277 | orchestrator | + block_device { 2026-01-07 00:02:46.210288 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.210299 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.210310 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.210321 | orchestrator | + multiattach = false 2026-01-07 00:02:46.210333 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.210344 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.210356 | orchestrator | } 2026-01-07 00:02:46.210367 | orchestrator | 2026-01-07 00:02:46.210379 | orchestrator | + network { 2026-01-07 00:02:46.210390 | orchestrator | + access_network = false 2026-01-07 00:02:46.210402 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.210413 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.210425 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.210437 | orchestrator | + name = (known 
after apply) 2026-01-07 00:02:46.210448 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.210460 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.210472 | orchestrator | } 2026-01-07 00:02:46.210483 | orchestrator | } 2026-01-07 00:02:46.210509 | orchestrator | 2026-01-07 00:02:46.210521 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-07 00:02:46.210543 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.210556 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.210568 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.210579 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.210591 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.210602 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.210613 | orchestrator | + config_drive = true 2026-01-07 00:02:46.210625 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.210636 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.210648 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.210659 | orchestrator | + force_delete = false 2026-01-07 00:02:46.210676 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.210688 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.210699 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.210711 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.210723 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.210734 | orchestrator | + name = "testbed-node-5" 2026-01-07 00:02:46.210746 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.210757 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.210769 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.210779 | orchestrator | + 
stop_before_destroy = false 2026-01-07 00:02:46.210790 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.210802 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.210814 | orchestrator | 2026-01-07 00:02:46.210826 | orchestrator | + block_device { 2026-01-07 00:02:46.210838 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.210849 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.210860 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.210872 | orchestrator | + multiattach = false 2026-01-07 00:02:46.210883 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.210947 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.210960 | orchestrator | } 2026-01-07 00:02:46.210972 | orchestrator | 2026-01-07 00:02:46.210983 | orchestrator | + network { 2026-01-07 00:02:46.210994 | orchestrator | + access_network = false 2026-01-07 00:02:46.211005 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.211016 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.211026 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.211037 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.211048 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.211059 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.211071 | orchestrator | } 2026-01-07 00:02:46.211082 | orchestrator | } 2026-01-07 00:02:46.211094 | orchestrator | 2026-01-07 00:02:46.211106 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-07 00:02:46.211117 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-07 00:02:46.211129 | orchestrator | + fingerprint = (known after apply) 2026-01-07 00:02:46.211140 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.211152 | orchestrator | + name = "testbed" 2026-01-07 00:02:46.211163 | orchestrator | + private_key = 
(sensitive value) 2026-01-07 00:02:46.211175 | orchestrator | + public_key = (known after apply) 2026-01-07 00:02:46.211186 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.211197 | orchestrator | + user_id = (known after apply) 2026-01-07 00:02:46.211209 | orchestrator | } 2026-01-07 00:02:46.211220 | orchestrator | 2026-01-07 00:02:46.211231 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-07 00:02:46.211281 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.211305 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.211316 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.211328 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.211340 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.211351 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.211363 | orchestrator | } 2026-01-07 00:02:46.211375 | orchestrator | 2026-01-07 00:02:46.211387 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-07 00:02:46.211399 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.211410 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.211421 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.211432 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.211443 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.211454 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.211464 | orchestrator | } 2026-01-07 00:02:46.211475 | orchestrator | 2026-01-07 00:02:46.211486 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-07 00:02:46.211497 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{ 2026-01-07 00:02:46.211508 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.211519 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.211530 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.211541 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.211551 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.211562 | orchestrator | } 2026-01-07 00:02:46.211573 | orchestrator | 2026-01-07 00:02:46.211584 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-01-07 00:02:46.211595 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.211644 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.211654 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.211664 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.211674 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.211684 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.211695 | orchestrator | } 2026-01-07 00:02:46.211706 | orchestrator | 2026-01-07 00:02:46.211716 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-01-07 00:02:46.211726 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.211737 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.211747 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.211757 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.211774 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.211785 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.211795 | orchestrator | } 2026-01-07 00:02:46.211805 | orchestrator | 2026-01-07 00:02:46.211816 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2026-01-07 00:02:46.211837 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.211847 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.211858 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.211868 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.211878 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.211889 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.211917 | orchestrator | } 2026-01-07 00:02:46.211928 | orchestrator | 2026-01-07 00:02:46.211939 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2026-01-07 00:02:46.211949 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.211960 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.211970 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.211981 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.211991 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.212008 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.212018 | orchestrator | } 2026-01-07 00:02:46.212029 | orchestrator | 2026-01-07 00:02:46.212040 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2026-01-07 00:02:46.212051 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.212061 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.212072 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.212082 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.212092 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.212103 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.212113 | orchestrator | } 2026-01-07 
00:02:46.212123 | orchestrator | 2026-01-07 00:02:46.212134 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2026-01-07 00:02:46.212144 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.212155 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.212165 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.212175 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.212185 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.212196 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.212206 | orchestrator | } 2026-01-07 00:02:46.212217 | orchestrator | 2026-01-07 00:02:46.212285 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2026-01-07 00:02:46.212303 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2026-01-07 00:02:46.212380 | orchestrator | + fixed_ip = (known after apply) 2026-01-07 00:02:46.212407 | orchestrator | + floating_ip = (known after apply) 2026-01-07 00:02:46.212466 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.212478 | orchestrator | + port_id = (known after apply) 2026-01-07 00:02:46.212498 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.212550 | orchestrator | } 2026-01-07 00:02:46.212564 | orchestrator | 2026-01-07 00:02:46.212620 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created 2026-01-07 00:02:46.212648 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2026-01-07 00:02:46.212668 | orchestrator | + address = (known after apply) 2026-01-07 00:02:46.212709 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.212738 | orchestrator | + dns_domain = (known after apply) 2026-01-07 00:02:46.212799 | orchestrator | 
+ dns_name = (known after apply) 2026-01-07 00:02:46.212827 | orchestrator | + fixed_ip = (known after apply) 2026-01-07 00:02:46.212838 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.212849 | orchestrator | + pool = "public" 2026-01-07 00:02:46.212907 | orchestrator | + port_id = (known after apply) 2026-01-07 00:02:46.212920 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.212930 | orchestrator | + subnet_id = (known after apply) 2026-01-07 00:02:46.212971 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.212984 | orchestrator | } 2026-01-07 00:02:46.212995 | orchestrator | 2026-01-07 00:02:46.213005 | orchestrator | # openstack_networking_network_v2.net_management will be created 2026-01-07 00:02:46.213016 | orchestrator | + resource "openstack_networking_network_v2" "net_management" { 2026-01-07 00:02:46.213060 | orchestrator | + admin_state_up = (known after apply) 2026-01-07 00:02:46.213074 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.213084 | orchestrator | + availability_zone_hints = [ 2026-01-07 00:02:46.213095 | orchestrator | + "nova", 2026-01-07 00:02:46.213105 | orchestrator | ] 2026-01-07 00:02:46.213145 | orchestrator | + dns_domain = (known after apply) 2026-01-07 00:02:46.213158 | orchestrator | + external = (known after apply) 2026-01-07 00:02:46.213169 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.213179 | orchestrator | + mtu = (known after apply) 2026-01-07 00:02:46.213190 | orchestrator | + name = "net-testbed-management" 2026-01-07 00:02:46.213201 | orchestrator | + port_security_enabled = (known after apply) 2026-01-07 00:02:46.213253 | orchestrator | + qos_policy_id = (known after apply) 2026-01-07 00:02:46.213263 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.213274 | orchestrator | + shared = (known after apply) 2026-01-07 00:02:46.213284 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.213324 | 
orchestrator | + transparent_vlan = (known after apply) 2026-01-07 00:02:46.213335 | orchestrator | 2026-01-07 00:02:46.213345 | orchestrator | + segments (known after apply) 2026-01-07 00:02:46.213356 | orchestrator | } 2026-01-07 00:02:46.213367 | orchestrator | 2026-01-07 00:02:46.213407 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-01-07 00:02:46.213418 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-01-07 00:02:46.213429 | orchestrator | + admin_state_up = (known after apply) 2026-01-07 00:02:46.213440 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-07 00:02:46.213450 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-07 00:02:46.213499 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.213512 | orchestrator | + device_id = (known after apply) 2026-01-07 00:02:46.213523 | orchestrator | + device_owner = (known after apply) 2026-01-07 00:02:46.213535 | orchestrator | + dns_assignment = (known after apply) 2026-01-07 00:02:46.213574 | orchestrator | + dns_name = (known after apply) 2026-01-07 00:02:46.213586 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.213597 | orchestrator | + mac_address = (known after apply) 2026-01-07 00:02:46.213607 | orchestrator | + network_id = (known after apply) 2026-01-07 00:02:46.213629 | orchestrator | + port_security_enabled = (known after apply) 2026-01-07 00:02:46.213668 | orchestrator | + qos_policy_id = (known after apply) 2026-01-07 00:02:46.213678 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.213689 | orchestrator | + security_group_ids = (known after apply) 2026-01-07 00:02:46.213699 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.213709 | orchestrator | 2026-01-07 00:02:46.213749 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.213760 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-07 
00:02:46.213770 | orchestrator | } 2026-01-07 00:02:46.213781 | orchestrator | 2026-01-07 00:02:46.213791 | orchestrator | + binding (known after apply) 2026-01-07 00:02:46.213824 | orchestrator | 2026-01-07 00:02:46.213835 | orchestrator | + fixed_ip { 2026-01-07 00:02:46.213845 | orchestrator | + ip_address = "192.168.16.5" 2026-01-07 00:02:46.213855 | orchestrator | + subnet_id = (known after apply) 2026-01-07 00:02:46.213866 | orchestrator | } 2026-01-07 00:02:46.213951 | orchestrator | } 2026-01-07 00:02:46.213963 | orchestrator | 2026-01-07 00:02:46.213973 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-01-07 00:02:46.214007 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-07 00:02:46.214041 | orchestrator | + admin_state_up = (known after apply) 2026-01-07 00:02:46.214051 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-07 00:02:46.214062 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-07 00:02:46.214130 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.214141 | orchestrator | + device_id = (known after apply) 2026-01-07 00:02:46.216463 | orchestrator | + device_owner = (known after apply) 2026-01-07 00:02:46.216487 | orchestrator | + dns_assignment = (known after apply) 2026-01-07 00:02:46.216497 | orchestrator | + dns_name = (known after apply) 2026-01-07 00:02:46.216507 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.216516 | orchestrator | + mac_address = (known after apply) 2026-01-07 00:02:46.216526 | orchestrator | + network_id = (known after apply) 2026-01-07 00:02:46.216536 | orchestrator | + port_security_enabled = (known after apply) 2026-01-07 00:02:46.216545 | orchestrator | + qos_policy_id = (known after apply) 2026-01-07 00:02:46.216555 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.216580 | orchestrator | + security_group_ids = (known after apply) 2026-01-07 
00:02:46.216589 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.216599 | orchestrator | 2026-01-07 00:02:46.216610 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.216620 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-07 00:02:46.216630 | orchestrator | } 2026-01-07 00:02:46.216640 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.216649 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-07 00:02:46.216659 | orchestrator | } 2026-01-07 00:02:46.216669 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.216678 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-07 00:02:46.216688 | orchestrator | } 2026-01-07 00:02:46.216698 | orchestrator | 2026-01-07 00:02:46.216707 | orchestrator | + binding (known after apply) 2026-01-07 00:02:46.216717 | orchestrator | 2026-01-07 00:02:46.216726 | orchestrator | + fixed_ip { 2026-01-07 00:02:46.216736 | orchestrator | + ip_address = "192.168.16.10" 2026-01-07 00:02:46.216745 | orchestrator | + subnet_id = (known after apply) 2026-01-07 00:02:46.216755 | orchestrator | } 2026-01-07 00:02:46.216764 | orchestrator | } 2026-01-07 00:02:46.216774 | orchestrator | 2026-01-07 00:02:46.216783 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-01-07 00:02:46.216793 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-07 00:02:46.216803 | orchestrator | + admin_state_up = (known after apply) 2026-01-07 00:02:46.216812 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-07 00:02:46.216822 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-07 00:02:46.216831 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.216841 | orchestrator | + device_id = (known after apply) 2026-01-07 00:02:46.216851 | orchestrator | + device_owner = (known after apply) 2026-01-07 00:02:46.216860 | orchestrator | + dns_assignment = (known after 
apply) 2026-01-07 00:02:46.216870 | orchestrator | + dns_name = (known after apply) 2026-01-07 00:02:46.216879 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.216889 | orchestrator | + mac_address = (known after apply) 2026-01-07 00:02:46.216950 | orchestrator | + network_id = (known after apply) 2026-01-07 00:02:46.216959 | orchestrator | + port_security_enabled = (known after apply) 2026-01-07 00:02:46.216969 | orchestrator | + qos_policy_id = (known after apply) 2026-01-07 00:02:46.216978 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.216988 | orchestrator | + security_group_ids = (known after apply) 2026-01-07 00:02:46.216997 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.217006 | orchestrator | 2026-01-07 00:02:46.217016 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.217025 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-07 00:02:46.217035 | orchestrator | } 2026-01-07 00:02:46.217044 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.217053 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-07 00:02:46.217063 | orchestrator | } 2026-01-07 00:02:46.217072 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.217082 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-07 00:02:46.217091 | orchestrator | } 2026-01-07 00:02:46.217101 | orchestrator | 2026-01-07 00:02:46.217110 | orchestrator | + binding (known after apply) 2026-01-07 00:02:46.217120 | orchestrator | 2026-01-07 00:02:46.217129 | orchestrator | + fixed_ip { 2026-01-07 00:02:46.217138 | orchestrator | + ip_address = "192.168.16.11" 2026-01-07 00:02:46.217148 | orchestrator | + subnet_id = (known after apply) 2026-01-07 00:02:46.217157 | orchestrator | } 2026-01-07 00:02:46.217166 | orchestrator | } 2026-01-07 00:02:46.217176 | orchestrator | 2026-01-07 00:02:46.217185 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-01-07 
00:02:46.217195 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-07 00:02:46.217204 | orchestrator | + admin_state_up = (known after apply) 2026-01-07 00:02:46.217214 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-07 00:02:46.217223 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-07 00:02:46.217233 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.217250 | orchestrator | + device_id = (known after apply) 2026-01-07 00:02:46.217260 | orchestrator | + device_owner = (known after apply) 2026-01-07 00:02:46.217269 | orchestrator | + dns_assignment = (known after apply) 2026-01-07 00:02:46.217278 | orchestrator | + dns_name = (known after apply) 2026-01-07 00:02:46.217296 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.217372 | orchestrator | + mac_address = (known after apply) 2026-01-07 00:02:46.217382 | orchestrator | + network_id = (known after apply) 2026-01-07 00:02:46.217391 | orchestrator | + port_security_enabled = (known after apply) 2026-01-07 00:02:46.217414 | orchestrator | + qos_policy_id = (known after apply) 2026-01-07 00:02:46.217423 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.217433 | orchestrator | + security_group_ids = (known after apply) 2026-01-07 00:02:46.217442 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.217452 | orchestrator | 2026-01-07 00:02:46.217461 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.217471 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-07 00:02:46.217479 | orchestrator | } 2026-01-07 00:02:46.217488 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.217496 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-07 00:02:46.217504 | orchestrator | } 2026-01-07 00:02:46.217513 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.217521 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-07 00:02:46.217530 
| orchestrator | } 2026-01-07 00:02:46.217538 | orchestrator | 2026-01-07 00:02:46.217546 | orchestrator | + binding (known after apply) 2026-01-07 00:02:46.217555 | orchestrator | 2026-01-07 00:02:46.217563 | orchestrator | + fixed_ip { 2026-01-07 00:02:46.217571 | orchestrator | + ip_address = "192.168.16.12" 2026-01-07 00:02:46.217580 | orchestrator | + subnet_id = (known after apply) 2026-01-07 00:02:46.217588 | orchestrator | } 2026-01-07 00:02:46.217596 | orchestrator | } 2026-01-07 00:02:46.217605 | orchestrator | 2026-01-07 00:02:46.217613 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-01-07 00:02:46.217622 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-07 00:02:46.217630 | orchestrator | + admin_state_up = (known after apply) 2026-01-07 00:02:46.217638 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-07 00:02:46.217646 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-07 00:02:46.217655 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.217663 | orchestrator | + device_id = (known after apply) 2026-01-07 00:02:46.217671 | orchestrator | + device_owner = (known after apply) 2026-01-07 00:02:46.217679 | orchestrator | + dns_assignment = (known after apply) 2026-01-07 00:02:46.217688 | orchestrator | + dns_name = (known after apply) 2026-01-07 00:02:46.217696 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.217704 | orchestrator | + mac_address = (known after apply) 2026-01-07 00:02:46.217713 | orchestrator | + network_id = (known after apply) 2026-01-07 00:02:46.217721 | orchestrator | + port_security_enabled = (known after apply) 2026-01-07 00:02:46.217730 | orchestrator | + qos_policy_id = (known after apply) 2026-01-07 00:02:46.217738 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.217746 | orchestrator | + security_group_ids = (known after apply) 2026-01-07 00:02:46.217755 | 
orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.217763 | orchestrator | 2026-01-07 00:02:46.217771 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.217780 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-07 00:02:46.217788 | orchestrator | } 2026-01-07 00:02:46.217797 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.217805 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-07 00:02:46.217813 | orchestrator | } 2026-01-07 00:02:46.217822 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.217830 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-07 00:02:46.217838 | orchestrator | } 2026-01-07 00:02:46.217847 | orchestrator | 2026-01-07 00:02:46.217861 | orchestrator | + binding (known after apply) 2026-01-07 00:02:46.217870 | orchestrator | 2026-01-07 00:02:46.217878 | orchestrator | + fixed_ip { 2026-01-07 00:02:46.217886 | orchestrator | + ip_address = "192.168.16.13" 2026-01-07 00:02:46.217909 | orchestrator | + subnet_id = (known after apply) 2026-01-07 00:02:46.217918 | orchestrator | } 2026-01-07 00:02:46.217926 | orchestrator | } 2026-01-07 00:02:46.217935 | orchestrator | 2026-01-07 00:02:46.217943 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-01-07 00:02:46.217952 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-07 00:02:46.217960 | orchestrator | + admin_state_up = (known after apply) 2026-01-07 00:02:46.217969 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-07 00:02:46.227172 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-07 00:02:46.227187 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.227191 | orchestrator | + device_id = (known after apply) 2026-01-07 00:02:46.227196 | orchestrator | + device_owner = (known after apply) 2026-01-07 00:02:46.227200 | orchestrator | + dns_assignment = (known after apply) 2026-01-07 
00:02:46.227204 | orchestrator | + dns_name = (known after apply) 2026-01-07 00:02:46.227208 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.227212 | orchestrator | + mac_address = (known after apply) 2026-01-07 00:02:46.227216 | orchestrator | + network_id = (known after apply) 2026-01-07 00:02:46.227221 | orchestrator | + port_security_enabled = (known after apply) 2026-01-07 00:02:46.227224 | orchestrator | + qos_policy_id = (known after apply) 2026-01-07 00:02:46.227228 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227242 | orchestrator | + security_group_ids = (known after apply) 2026-01-07 00:02:46.227246 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.227252 | orchestrator | 2026-01-07 00:02:46.227257 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.227261 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-07 00:02:46.227266 | orchestrator | } 2026-01-07 00:02:46.227270 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.227274 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-07 00:02:46.227278 | orchestrator | } 2026-01-07 00:02:46.227282 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.227286 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-07 00:02:46.227290 | orchestrator | } 2026-01-07 00:02:46.227293 | orchestrator | 2026-01-07 00:02:46.227298 | orchestrator | + binding (known after apply) 2026-01-07 00:02:46.227302 | orchestrator | 2026-01-07 00:02:46.227305 | orchestrator | + fixed_ip { 2026-01-07 00:02:46.227309 | orchestrator | + ip_address = "192.168.16.14" 2026-01-07 00:02:46.227313 | orchestrator | + subnet_id = (known after apply) 2026-01-07 00:02:46.227317 | orchestrator | } 2026-01-07 00:02:46.227321 | orchestrator | } 2026-01-07 00:02:46.227325 | orchestrator | 2026-01-07 00:02:46.227329 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-01-07 00:02:46.227333 | orchestrator | 
+ resource "openstack_networking_port_v2" "node_port_management" { 2026-01-07 00:02:46.227337 | orchestrator | + admin_state_up = (known after apply) 2026-01-07 00:02:46.227341 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-07 00:02:46.227344 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-07 00:02:46.227348 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.227352 | orchestrator | + device_id = (known after apply) 2026-01-07 00:02:46.227356 | orchestrator | + device_owner = (known after apply) 2026-01-07 00:02:46.227360 | orchestrator | + dns_assignment = (known after apply) 2026-01-07 00:02:46.227364 | orchestrator | + dns_name = (known after apply) 2026-01-07 00:02:46.227367 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.227384 | orchestrator | + mac_address = (known after apply) 2026-01-07 00:02:46.227388 | orchestrator | + network_id = (known after apply) 2026-01-07 00:02:46.227392 | orchestrator | + port_security_enabled = (known after apply) 2026-01-07 00:02:46.227395 | orchestrator | + qos_policy_id = (known after apply) 2026-01-07 00:02:46.227410 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227414 | orchestrator | + security_group_ids = (known after apply) 2026-01-07 00:02:46.227417 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.227421 | orchestrator | 2026-01-07 00:02:46.227425 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.227429 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-07 00:02:46.227433 | orchestrator | } 2026-01-07 00:02:46.227436 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.227440 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-07 00:02:46.227444 | orchestrator | } 2026-01-07 00:02:46.227448 | orchestrator | + allowed_address_pairs { 2026-01-07 00:02:46.227452 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-07 00:02:46.227455 | orchestrator | } 2026-01-07 
00:02:46.227459 | orchestrator | 2026-01-07 00:02:46.227470 | orchestrator | + binding (known after apply) 2026-01-07 00:02:46.227474 | orchestrator | 2026-01-07 00:02:46.227478 | orchestrator | + fixed_ip { 2026-01-07 00:02:46.227482 | orchestrator | + ip_address = "192.168.16.15" 2026-01-07 00:02:46.227485 | orchestrator | + subnet_id = (known after apply) 2026-01-07 00:02:46.227489 | orchestrator | } 2026-01-07 00:02:46.227493 | orchestrator | } 2026-01-07 00:02:46.227497 | orchestrator | 2026-01-07 00:02:46.227501 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-01-07 00:02:46.227505 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-01-07 00:02:46.227508 | orchestrator | + force_destroy = false 2026-01-07 00:02:46.227512 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.227516 | orchestrator | + port_id = (known after apply) 2026-01-07 00:02:46.227520 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227524 | orchestrator | + router_id = (known after apply) 2026-01-07 00:02:46.227528 | orchestrator | + subnet_id = (known after apply) 2026-01-07 00:02:46.227532 | orchestrator | } 2026-01-07 00:02:46.227535 | orchestrator | 2026-01-07 00:02:46.227539 | orchestrator | # openstack_networking_router_v2.router will be created 2026-01-07 00:02:46.227543 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-01-07 00:02:46.227547 | orchestrator | + admin_state_up = (known after apply) 2026-01-07 00:02:46.227551 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.227555 | orchestrator | + availability_zone_hints = [ 2026-01-07 00:02:46.227559 | orchestrator | + "nova", 2026-01-07 00:02:46.227562 | orchestrator | ] 2026-01-07 00:02:46.227566 | orchestrator | + distributed = (known after apply) 2026-01-07 00:02:46.227570 | orchestrator | + enable_snat = (known after apply) 2026-01-07 00:02:46.227574 | 
orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-01-07 00:02:46.227578 | orchestrator | + external_qos_policy_id = (known after apply) 2026-01-07 00:02:46.227582 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.227585 | orchestrator | + name = "testbed" 2026-01-07 00:02:46.227589 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227593 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.227597 | orchestrator | 2026-01-07 00:02:46.227601 | orchestrator | + external_fixed_ip (known after apply) 2026-01-07 00:02:46.227605 | orchestrator | } 2026-01-07 00:02:46.227608 | orchestrator | 2026-01-07 00:02:46.227612 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-01-07 00:02:46.227617 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-01-07 00:02:46.227621 | orchestrator | + description = "ssh" 2026-01-07 00:02:46.227625 | orchestrator | + direction = "ingress" 2026-01-07 00:02:46.227629 | orchestrator | + ethertype = "IPv4" 2026-01-07 00:02:46.227632 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.227637 | orchestrator | + port_range_max = 22 2026-01-07 00:02:46.227640 | orchestrator | + port_range_min = 22 2026-01-07 00:02:46.227644 | orchestrator | + protocol = "tcp" 2026-01-07 00:02:46.227648 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227655 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-07 00:02:46.227659 | orchestrator | + remote_group_id = (known after apply) 2026-01-07 00:02:46.227663 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-07 00:02:46.227667 | orchestrator | + security_group_id = (known after apply) 2026-01-07 00:02:46.227671 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.227675 | orchestrator | } 2026-01-07 00:02:46.227678 | orchestrator | 2026-01-07 
00:02:46.227682 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-01-07 00:02:46.227696 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-01-07 00:02:46.227700 | orchestrator | + description = "wireguard" 2026-01-07 00:02:46.227704 | orchestrator | + direction = "ingress" 2026-01-07 00:02:46.227708 | orchestrator | + ethertype = "IPv4" 2026-01-07 00:02:46.227712 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.227715 | orchestrator | + port_range_max = 51820 2026-01-07 00:02:46.227719 | orchestrator | + port_range_min = 51820 2026-01-07 00:02:46.227723 | orchestrator | + protocol = "udp" 2026-01-07 00:02:46.227727 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227731 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-07 00:02:46.227735 | orchestrator | + remote_group_id = (known after apply) 2026-01-07 00:02:46.227738 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-07 00:02:46.227742 | orchestrator | + security_group_id = (known after apply) 2026-01-07 00:02:46.227746 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.227750 | orchestrator | } 2026-01-07 00:02:46.227754 | orchestrator | 2026-01-07 00:02:46.227758 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-01-07 00:02:46.227762 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-01-07 00:02:46.227766 | orchestrator | + direction = "ingress" 2026-01-07 00:02:46.227769 | orchestrator | + ethertype = "IPv4" 2026-01-07 00:02:46.227773 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.227777 | orchestrator | + protocol = "tcp" 2026-01-07 00:02:46.227781 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227785 | orchestrator | + remote_address_group_id = (known 
after apply) 2026-01-07 00:02:46.227789 | orchestrator | + remote_group_id = (known after apply) 2026-01-07 00:02:46.227796 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-07 00:02:46.227801 | orchestrator | + security_group_id = (known after apply) 2026-01-07 00:02:46.227804 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.227808 | orchestrator | } 2026-01-07 00:02:46.227812 | orchestrator | 2026-01-07 00:02:46.227816 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-01-07 00:02:46.227820 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-01-07 00:02:46.227824 | orchestrator | + direction = "ingress" 2026-01-07 00:02:46.227828 | orchestrator | + ethertype = "IPv4" 2026-01-07 00:02:46.227831 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.227835 | orchestrator | + protocol = "udp" 2026-01-07 00:02:46.227839 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227843 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-07 00:02:46.227847 | orchestrator | + remote_group_id = (known after apply) 2026-01-07 00:02:46.227850 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-07 00:02:46.227854 | orchestrator | + security_group_id = (known after apply) 2026-01-07 00:02:46.227858 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.227862 | orchestrator | } 2026-01-07 00:02:46.227866 | orchestrator | 2026-01-07 00:02:46.227870 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2026-01-07 00:02:46.227877 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2026-01-07 00:02:46.227881 | orchestrator | + direction = "ingress" 2026-01-07 00:02:46.227885 | orchestrator | + ethertype = "IPv4" 2026-01-07 00:02:46.227889 | orchestrator | + id = 
(known after apply) 2026-01-07 00:02:46.227933 | orchestrator | + protocol = "icmp" 2026-01-07 00:02:46.227936 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227940 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-07 00:02:46.227944 | orchestrator | + remote_group_id = (known after apply) 2026-01-07 00:02:46.227948 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-07 00:02:46.227952 | orchestrator | + security_group_id = (known after apply) 2026-01-07 00:02:46.227955 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.227959 | orchestrator | } 2026-01-07 00:02:46.227963 | orchestrator | 2026-01-07 00:02:46.227967 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2026-01-07 00:02:46.227970 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2026-01-07 00:02:46.227974 | orchestrator | + direction = "ingress" 2026-01-07 00:02:46.227978 | orchestrator | + ethertype = "IPv4" 2026-01-07 00:02:46.227982 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.227986 | orchestrator | + protocol = "tcp" 2026-01-07 00:02:46.227989 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.227993 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-07 00:02:46.228004 | orchestrator | + remote_group_id = (known after apply) 2026-01-07 00:02:46.228008 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-07 00:02:46.228012 | orchestrator | + security_group_id = (known after apply) 2026-01-07 00:02:46.228016 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.228020 | orchestrator | } 2026-01-07 00:02:46.228023 | orchestrator | 2026-01-07 00:02:46.228027 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2026-01-07 00:02:46.228031 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" 
"security_group_node_rule2" { 2026-01-07 00:02:46.228035 | orchestrator | + direction = "ingress" 2026-01-07 00:02:46.228039 | orchestrator | + ethertype = "IPv4" 2026-01-07 00:02:46.228042 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.228046 | orchestrator | + protocol = "udp" 2026-01-07 00:02:46.228050 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.228054 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-07 00:02:46.228058 | orchestrator | + remote_group_id = (known after apply) 2026-01-07 00:02:46.228061 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-07 00:02:46.228065 | orchestrator | + security_group_id = (known after apply) 2026-01-07 00:02:46.228069 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.228073 | orchestrator | } 2026-01-07 00:02:46.228077 | orchestrator | 2026-01-07 00:02:46.228080 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2026-01-07 00:02:46.228084 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2026-01-07 00:02:46.228088 | orchestrator | + direction = "ingress" 2026-01-07 00:02:46.228094 | orchestrator | + ethertype = "IPv4" 2026-01-07 00:02:46.228098 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.228102 | orchestrator | + protocol = "icmp" 2026-01-07 00:02:46.228106 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.228109 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-07 00:02:46.228113 | orchestrator | + remote_group_id = (known after apply) 2026-01-07 00:02:46.228117 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-07 00:02:46.228121 | orchestrator | + security_group_id = (known after apply) 2026-01-07 00:02:46.228125 | orchestrator | + tenant_id = (known after apply) 2026-01-07 00:02:46.228132 | orchestrator | } 2026-01-07 00:02:46.228136 | orchestrator | 2026-01-07 
00:02:46.228140 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-01-07 00:02:46.228144 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-01-07 00:02:46.228147 | orchestrator | + description = "vrrp"
2026-01-07 00:02:46.228151 | orchestrator | + direction = "ingress"
2026-01-07 00:02:46.228155 | orchestrator | + ethertype = "IPv4"
2026-01-07 00:02:46.228159 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.228163 | orchestrator | + protocol = "112"
2026-01-07 00:02:46.228166 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.228170 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-07 00:02:46.228174 | orchestrator | + remote_group_id = (known after apply)
2026-01-07 00:02:46.228181 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-07 00:02:46.228185 | orchestrator | + security_group_id = (known after apply)
2026-01-07 00:02:46.228189 | orchestrator | + tenant_id = (known after apply)
2026-01-07 00:02:46.228193 | orchestrator | }
2026-01-07 00:02:46.228197 | orchestrator |
2026-01-07 00:02:46.228201 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-01-07 00:02:46.228205 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-01-07 00:02:46.228208 | orchestrator | + all_tags = (known after apply)
2026-01-07 00:02:46.228212 | orchestrator | + description = "management security group"
2026-01-07 00:02:46.228216 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.228220 | orchestrator | + name = "testbed-management"
2026-01-07 00:02:46.228223 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.228227 | orchestrator | + stateful = (known after apply)
2026-01-07 00:02:46.228231 | orchestrator | + tenant_id = (known after apply)
2026-01-07 00:02:46.228235 | orchestrator | }
2026-01-07
00:02:46.228239 | orchestrator |
2026-01-07 00:02:46.228242 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-01-07 00:02:46.228246 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-01-07 00:02:46.228250 | orchestrator | + all_tags = (known after apply)
2026-01-07 00:02:46.228254 | orchestrator | + description = "node security group"
2026-01-07 00:02:46.228258 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.228262 | orchestrator | + name = "testbed-node"
2026-01-07 00:02:46.228265 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.228269 | orchestrator | + stateful = (known after apply)
2026-01-07 00:02:46.228273 | orchestrator | + tenant_id = (known after apply)
2026-01-07 00:02:46.228277 | orchestrator | }
2026-01-07 00:02:46.228280 | orchestrator |
2026-01-07 00:02:46.228284 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-01-07 00:02:46.228288 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-01-07 00:02:46.228292 | orchestrator | + all_tags = (known after apply)
2026-01-07 00:02:46.228296 | orchestrator | + cidr = "192.168.16.0/20"
2026-01-07 00:02:46.228300 | orchestrator | + dns_nameservers = [
2026-01-07 00:02:46.228303 | orchestrator | + "8.8.8.8",
2026-01-07 00:02:46.228307 | orchestrator | + "9.9.9.9",
2026-01-07 00:02:46.228311 | orchestrator | ]
2026-01-07 00:02:46.228315 | orchestrator | + enable_dhcp = true
2026-01-07 00:02:46.228319 | orchestrator | + gateway_ip = (known after apply)
2026-01-07 00:02:46.228323 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.228327 | orchestrator | + ip_version = 4
2026-01-07 00:02:46.228330 | orchestrator | + ipv6_address_mode = (known after apply)
2026-01-07 00:02:46.228334 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-01-07 00:02:46.228338 | orchestrator | + name = "subnet-testbed-management"
2026-01-07 00:02:46.228342 | orchestrator | + network_id = (known after apply)
2026-01-07 00:02:46.228346 | orchestrator | + no_gateway = false
2026-01-07 00:02:46.228349 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.228353 | orchestrator | + service_types = (known after apply)
2026-01-07 00:02:46.228360 | orchestrator | + tenant_id = (known after apply)
2026-01-07 00:02:46.228364 | orchestrator |
2026-01-07 00:02:46.228368 | orchestrator | + allocation_pool {
2026-01-07 00:02:46.228372 | orchestrator | + end = "192.168.31.250"
2026-01-07 00:02:46.228376 | orchestrator | + start = "192.168.31.200"
2026-01-07 00:02:46.228380 | orchestrator | }
2026-01-07 00:02:46.228383 | orchestrator | }
2026-01-07 00:02:46.228387 | orchestrator |
2026-01-07 00:02:46.228391 | orchestrator | # terraform_data.image will be created
2026-01-07 00:02:46.228395 | orchestrator | + resource "terraform_data" "image" {
2026-01-07 00:02:46.228399 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.228402 | orchestrator | + input = "Ubuntu 24.04"
2026-01-07 00:02:46.228406 | orchestrator | + output = (known after apply)
2026-01-07 00:02:46.228410 | orchestrator | }
2026-01-07 00:02:46.228414 | orchestrator |
2026-01-07 00:02:46.228418 | orchestrator | # terraform_data.image_node will be created
2026-01-07 00:02:46.228421 | orchestrator | + resource "terraform_data" "image_node" {
2026-01-07 00:02:46.228425 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.228429 | orchestrator | + input = "Ubuntu 24.04"
2026-01-07 00:02:46.228433 | orchestrator | + output = (known after apply)
2026-01-07 00:02:46.228437 | orchestrator | }
2026-01-07 00:02:46.228440 | orchestrator |
2026-01-07 00:02:46.228444 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
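For reference, the `security_group_rule_vrrp` resource shown in the plan above corresponds roughly to the following HCL. This is a sketch reconstructed from the plan output, not the testbed repository's actual source; in particular, the `security_group_id` reference is an assumption.

```hcl
# Sketch reconstructed from the plan output; the security_group_id
# reference is assumed, not taken from the actual testbed sources.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number 112 = VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```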
2026-01-07 00:02:46.228448 | orchestrator |
2026-01-07 00:02:46.228452 | orchestrator | Changes to Outputs:
2026-01-07 00:02:46.228456 | orchestrator | + manager_address = (sensitive value)
2026-01-07 00:02:46.228460 | orchestrator | + private_key = (sensitive value)
2026-01-07 00:02:46.391794 | orchestrator | terraform_data.image: Creating...
2026-01-07 00:02:46.391861 | orchestrator | terraform_data.image: Creation complete after 0s [id=256ac0ee-2605-7f8d-a184-296b1d60ebcd]
2026-01-07 00:02:46.391868 | orchestrator | terraform_data.image_node: Creating...
2026-01-07 00:02:46.392998 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=5d6bf92f-33c8-2203-a4a6-653d57adb4f9]
2026-01-07 00:02:46.410447 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-07 00:02:46.418091 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-07 00:02:46.424425 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-07 00:02:46.424467 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-07 00:02:46.424472 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-07 00:02:46.424477 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-07 00:02:46.426605 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-07 00:02:46.426624 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-07 00:02:46.426628 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-07 00:02:46.430066 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-07 00:02:46.917393 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-07 00:02:46.923975 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-07 00:02:46.929963 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-07 00:02:46.938826 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-07 00:02:47.061549 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-07 00:02:47.069304 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-07 00:02:47.612742 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=1ff682de-950c-4d69-bebc-f8cb4a96be2d]
2026-01-07 00:02:47.625233 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-07 00:02:50.031993 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=b31d70e3-b168-49a6-8859-8d7d4687e463]
2026-01-07 00:02:50.040948 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-07 00:02:50.055640 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=6ba210b4-a43a-450d-93ff-eb978033e3d5]
2026-01-07 00:02:50.061009 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-07 00:02:50.077407 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=259f5b3c-7b2e-4352-b31f-9bca396d8d3d]
2026-01-07 00:02:50.078003 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=fef6d06e-2e84-4523-b9f6-c646394c7616]
2026-01-07 00:02:50.087711 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-07 00:02:50.088602 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-07 00:02:50.111436 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=e64e84b9-7894-4a82-9b6d-98451d3876ac]
2026-01-07 00:02:50.115273 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-07 00:02:50.303621 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=e79c7a29-b83e-4f0d-b893-2f76efcc2de7]
2026-01-07 00:02:50.316247 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=4e087c0c-4e3c-44c7-8e14-59e041e19843]
2026-01-07 00:02:50.329715 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-07 00:02:50.329812 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=a08497b0-f7e1-49b2-88eb-3502c1ea5c7e]
2026-01-07 00:02:50.329824 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-07 00:02:50.336467 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-07 00:02:50.350246 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=f5a0b4b8463e09027535ddf0a2d7e26b4067a0d0]
2026-01-07 00:02:50.352721 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-07 00:02:50.352746 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=93d32af34dd348b07c2ef455eab7f386b088ef45]
2026-01-07 00:02:50.371338 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=3408abb5-01eb-4a5b-916f-01f572b7843e]
2026-01-07 00:02:51.012877 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=1a8e12e9-2702-442e-8e8b-1bb37c249997]
2026-01-07 00:02:51.475162 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=8a48c9e5-a4f2-4094-8379-50667a5a4a70]
2026-01-07 00:02:51.484573 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-07 00:02:53.401634 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=82128b42-724c-4521-9e38-07aa1eb87990]
2026-01-07 00:02:53.457891 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=d30537dd-f05d-4658-af3c-1d08cd97752f]
2026-01-07 00:02:53.461538 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=fd5a3f23-fcfd-47ca-822c-e3718156259e]
2026-01-07 00:02:53.471267 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=2bd50aac-7288-4b67-9b89-7e8f2f739bb4]
2026-01-07 00:02:53.543896 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=1d88365d-d1c8-462a-a122-3aa4d05825ad]
2026-01-07 00:02:53.761328 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=23f1264f-2652-4928-a25f-b9e67a9fda53]
2026-01-07 00:02:54.961277 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=7970584d-b79d-4dff-a322-73f17abc01f4]
2026-01-07 00:02:54.967388 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-07 00:02:54.968637 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-07 00:02:54.971432 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-07 00:02:55.185293 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=d8401565-47f1-4b32-841b-19b72c716e21]
2026-01-07 00:02:55.198840 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-07 00:02:55.199559 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-07 00:02:55.199655 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-07 00:02:55.203091 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-07 00:02:55.203592 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-07 00:02:55.203710 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-07 00:02:55.257816 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=da3fa22b-5676-4547-a46d-2bd1dd721093]
2026-01-07 00:02:55.263973 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-07 00:02:55.266216 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-07 00:02:55.266756 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-07 00:02:55.480175 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=01897e9c-4cb2-4516-8c2c-2c4aef332d03]
2026-01-07 00:02:55.487014 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-07 00:02:55.691173 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=9157d7eb-c491-4813-9d80-f80b7394e598]
2026-01-07 00:02:55.702136 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-07 00:02:56.088044 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=5d60e5c0-9a3f-42d8-b342-ba6937e163f7]
2026-01-07 00:02:56.105928 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-07 00:02:56.300635 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=cfd14098-d8d7-4970-afb8-6e15cb8c62d7]
2026-01-07 00:02:56.312519 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-07 00:02:56.640742 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=6775f034-016c-424e-bb82-8f871b20932e]
2026-01-07 00:02:56.654828 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-07 00:02:56.732709 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=f821dc04-6d4d-45d6-a4df-f15beb1078fe]
2026-01-07 00:02:56.747707 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-07 00:02:56.826608 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=33a01b7a-ddbe-4605-b56b-31cf1018caca]
2026-01-07 00:02:56.837046 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-07 00:02:57.008895 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=b51a637f-6413-466b-ae0e-415620da5781]
2026-01-07 00:02:57.153641 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=b1edb8a9-244c-4c66-82d1-c9cde39be9e3]
2026-01-07 00:02:57.407721 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=7d12bb3a-b391-44b6-acc3-b664d8173559]
2026-01-07 00:02:57.497970 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=1cbd185b-c110-4ed3-8d79-3e8f7bc76baa]
2026-01-07 00:02:57.722632 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 3s [id=d899f8fb-84b1-49d1-8e23-944901ed7f21]
2026-01-07 00:02:57.761152 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=2035d6d1-b9c7-47fe-86fa-bb0a81f9de86]
2026-01-07 00:02:57.826499 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=4ee82844-0e99-497c-bb55-1ccf7c9513ed]
2026-01-07 00:02:57.927672 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 3s [id=9630d381-52e6-43ac-8d46-07942a20215a]
2026-01-07 00:02:57.969473 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ef7803ef-13d3-4463-86e4-71bbf2d2edf0]
2026-01-07 00:03:00.965048 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 6s [id=a8eae6da-0e54-4699-a0f5-648266c654ba]
2026-01-07 00:03:00.996174 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-07 00:03:00.999728 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-07 00:03:01.004609 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-07 00:03:01.008409 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-07 00:03:01.012084 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-07 00:03:01.019571 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-07 00:03:01.033178 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-07 00:03:03.125898 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=d796a580-2537-43d8-93f4-16cfc3e17c26]
2026-01-07 00:03:03.137684 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-07 00:03:03.140185 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-07 00:03:03.141579 | orchestrator | local_file.inventory: Creating...
2026-01-07 00:03:03.144640 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fcefe706c959a13117c049481ea83cda6e12f99c]
2026-01-07 00:03:03.145435 | orchestrator | local_file.inventory: Creation complete after 0s [id=7803af64aa968d83d92f91de29fbe37827dab8ae]
2026-01-07 00:03:04.287778 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=d796a580-2537-43d8-93f4-16cfc3e17c26]
2026-01-07 00:03:11.005230 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-07 00:03:11.005366 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-07 00:03:11.013678 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-07 00:03:11.013734 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating...
[10s elapsed]
2026-01-07 00:03:11.021971 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-07 00:03:11.039390 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-07 00:03:21.013998 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-07 00:03:21.014197 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-07 00:03:21.014213 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-07 00:03:21.014239 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-07 00:03:21.022404 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-07 00:03:21.039626 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-07 00:03:31.014612 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-07 00:03:31.014756 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-07 00:03:31.014774 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-07 00:03:31.014786 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-07 00:03:31.023106 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-07 00:03:31.040718 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-07 00:03:32.389381 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=38b6f5fe-7420-4045-8d64-c42607a77072]
2026-01-07 00:03:41.022716 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating...
[40s elapsed]
2026-01-07 00:03:41.022809 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-01-07 00:03:41.022830 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-01-07 00:03:41.023891 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-01-07 00:03:41.041372 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-01-07 00:03:41.859319 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=382b4a51-da28-448f-b4ec-61c9d769854c]
2026-01-07 00:03:51.031672 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-01-07 00:03:51.031811 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-01-07 00:03:51.031820 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-01-07 00:03:51.042255 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-01-07 00:03:52.013432 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=8cc15f03-560f-447f-ad14-88ba6c9db6e6]
2026-01-07 00:03:52.168734 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=138b1311-7ae5-4bb9-8b38-86d8e7d87342]
2026-01-07 00:03:52.465755 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=c35dc3fb-abc8-41f8-a47a-137b9026ec8f]
2026-01-07 00:04:01.040118 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-01-07 00:04:02.199565 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=14a43a66-18c1-40d5-86e4-ce19ac4ad6f2]
2026-01-07 00:04:02.217125 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-07 00:04:02.238185 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-07 00:04:02.241769 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3064467684066756259]
2026-01-07 00:04:02.243504 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-07 00:04:02.243551 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-07 00:04:02.243595 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-07 00:04:02.254114 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-07 00:04:02.268562 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-07 00:04:02.296614 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-07 00:04:02.303490 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-07 00:04:02.335793 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-07 00:04:02.339243 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-07 00:04:05.890838 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=8cc15f03-560f-447f-ad14-88ba6c9db6e6/6ba210b4-a43a-450d-93ff-eb978033e3d5]
2026-01-07 00:04:05.918628 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=c35dc3fb-abc8-41f8-a47a-137b9026ec8f/a08497b0-f7e1-49b2-88eb-3502c1ea5c7e]
2026-01-07 00:04:06.087203 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=382b4a51-da28-448f-b4ec-61c9d769854c/e64e84b9-7894-4a82-9b6d-98451d3876ac]
2026-01-07 00:04:07.889420 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=c35dc3fb-abc8-41f8-a47a-137b9026ec8f/259f5b3c-7b2e-4352-b31f-9bca396d8d3d]
2026-01-07 00:04:12.171311 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=382b4a51-da28-448f-b4ec-61c9d769854c/3408abb5-01eb-4a5b-916f-01f572b7843e]
2026-01-07 00:04:12.182075 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=8cc15f03-560f-447f-ad14-88ba6c9db6e6/fef6d06e-2e84-4523-b9f6-c646394c7616]
2026-01-07 00:04:12.220770 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=8cc15f03-560f-447f-ad14-88ba6c9db6e6/e79c7a29-b83e-4f0d-b893-2f76efcc2de7]
2026-01-07 00:04:12.235369 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=c35dc3fb-abc8-41f8-a47a-137b9026ec8f/4e087c0c-4e3c-44c7-8e14-59e041e19843]
2026-01-07 00:04:12.249595 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=382b4a51-da28-448f-b4ec-61c9d769854c/b31d70e3-b168-49a6-8859-8d7d4687e463]
2026-01-07 00:04:12.339125 | orchestrator |
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-07 00:04:22.339400 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-07 00:04:23.098410 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=e1c90135-ed14-4df7-b41a-dcd10472ec5b]
2026-01-07 00:04:23.118257 | orchestrator |
2026-01-07 00:04:23.118319 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-07 00:04:23.118328 | orchestrator |
2026-01-07 00:04:23.118334 | orchestrator | Outputs:
2026-01-07 00:04:23.118341 | orchestrator |
2026-01-07 00:04:23.118346 | orchestrator | manager_address =
2026-01-07 00:04:23.118352 | orchestrator | private_key =
2026-01-07 00:04:23.284948 | orchestrator | ok: Runtime: 0:01:56.574428
2026-01-07 00:04:23.320300 |
2026-01-07 00:04:23.320492 | TASK [Create infrastructure (stable)]
2026-01-07 00:04:23.857524 | orchestrator | skipping: Conditional result was False
2026-01-07 00:04:23.866718 |
2026-01-07 00:04:23.866899 | TASK [Fetch manager address]
2026-01-07 00:04:24.325610 | orchestrator | ok
2026-01-07 00:04:24.333638 |
2026-01-07 00:04:24.333786 | TASK [Set manager_host address]
2026-01-07 00:04:24.424781 | orchestrator | ok
2026-01-07 00:04:24.434900 |
2026-01-07 00:04:24.435047 | LOOP [Update ansible collections]
2026-01-07 00:04:27.002034 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-07 00:04:27.002732 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:04:27.002861 | orchestrator | Starting galaxy collection install process
2026-01-07 00:04:27.002920 | orchestrator | Process install dependency map
2026-01-07 00:04:27.002965 | orchestrator | Starting collection install process
2026-01-07 00:04:27.003009 | orchestrator | Installing 'osism.commons:999.0.0' to
'/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-01-07 00:04:27.003066 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-01-07 00:04:27.003160 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-07 00:04:27.003303 | orchestrator | ok: Item: commons Runtime: 0:00:02.180857
2026-01-07 00:04:28.217549 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-07 00:04:28.217691 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:04:28.217723 | orchestrator | Starting galaxy collection install process
2026-01-07 00:04:28.217745 | orchestrator | Process install dependency map
2026-01-07 00:04:28.217766 | orchestrator | Starting collection install process
2026-01-07 00:04:28.217786 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-01-07 00:04:28.217807 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-01-07 00:04:28.217826 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-07 00:04:28.217859 | orchestrator | ok: Item: services Runtime: 0:00:00.896421
2026-01-07 00:04:28.242902 |
2026-01-07 00:04:28.243160 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-07 00:04:38.870316 | orchestrator | ok
2026-01-07 00:04:38.879673 |
2026-01-07 00:04:38.879779 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-07 00:05:38.925067 | orchestrator | ok
2026-01-07 00:05:38.936260 |
2026-01-07 00:05:38.936411 | TASK [Fetch manager ssh hostkey]
2026-01-07 00:05:40.524788 | orchestrator | Output suppressed because no_log was given
2026-01-07 00:05:40.537130 |
2026-01-07
00:05:40.537291 | TASK [Get ssh keypair from terraform environment]
2026-01-07 00:05:41.083358 | orchestrator | ok: Runtime: 0:00:00.010880
2026-01-07 00:05:41.098065 |
2026-01-07 00:05:41.098246 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-07 00:05:41.147897 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-07 00:05:41.158651 |
2026-01-07 00:05:41.158828 | TASK [Run manager part 0]
2026-01-07 00:05:42.324385 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:05:42.382081 | orchestrator |
2026-01-07 00:05:42.382203 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-07 00:05:42.382213 | orchestrator |
2026-01-07 00:05:42.382229 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-07 00:05:44.777234 | orchestrator | ok: [testbed-manager]
2026-01-07 00:05:44.777300 | orchestrator |
2026-01-07 00:05:44.777325 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-07 00:05:44.777334 | orchestrator |
2026-01-07 00:05:44.777343 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:05:46.823417 | orchestrator | ok: [testbed-manager]
2026-01-07 00:05:46.823630 | orchestrator |
2026-01-07 00:05:46.823643 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-07 00:05:47.603876 | orchestrator | ok: [testbed-manager]
2026-01-07 00:05:47.603934 | orchestrator |
2026-01-07 00:05:47.603945 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-07 00:05:47.660054 | orchestrator | skipping: [testbed-manager]
2026-01-07
00:05:47.660148 | orchestrator | 2026-01-07 00:05:47.660184 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-07 00:05:47.705775 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:05:47.705847 | orchestrator | 2026-01-07 00:05:47.705856 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-07 00:05:47.745274 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:05:47.745345 | orchestrator | 2026-01-07 00:05:47.745352 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-07 00:05:47.789488 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:05:47.789561 | orchestrator | 2026-01-07 00:05:47.789568 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-07 00:05:47.834010 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:05:47.834115 | orchestrator | 2026-01-07 00:05:47.834124 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-07 00:05:47.880102 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:05:47.880213 | orchestrator | 2026-01-07 00:05:47.880226 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-07 00:05:47.916855 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:05:47.916961 | orchestrator | 2026-01-07 00:05:47.916975 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-07 00:05:48.670631 | orchestrator | changed: [testbed-manager] 2026-01-07 00:05:48.670704 | orchestrator | 2026-01-07 00:05:48.670714 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-07 00:08:49.503748 | orchestrator | changed: [testbed-manager] 2026-01-07 00:08:49.503813 | orchestrator | 2026-01-07 00:08:49.503826 | 
orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-07 00:10:16.133161 | orchestrator | changed: [testbed-manager] 2026-01-07 00:10:16.133247 | orchestrator | 2026-01-07 00:10:16.133264 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-07 00:10:40.337521 | orchestrator | changed: [testbed-manager] 2026-01-07 00:10:40.337634 | orchestrator | 2026-01-07 00:10:40.337654 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-07 00:10:50.769070 | orchestrator | changed: [testbed-manager] 2026-01-07 00:10:50.769179 | orchestrator | 2026-01-07 00:10:50.769197 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-07 00:10:50.816182 | orchestrator | ok: [testbed-manager] 2026-01-07 00:10:50.816286 | orchestrator | 2026-01-07 00:10:50.816307 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-07 00:10:51.640263 | orchestrator | ok: [testbed-manager] 2026-01-07 00:10:51.640369 | orchestrator | 2026-01-07 00:10:51.640388 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-07 00:10:52.399217 | orchestrator | changed: [testbed-manager] 2026-01-07 00:10:52.399316 | orchestrator | 2026-01-07 00:10:52.399333 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-07 00:10:58.938418 | orchestrator | changed: [testbed-manager] 2026-01-07 00:10:58.938507 | orchestrator | 2026-01-07 00:10:58.938541 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-07 00:11:05.295819 | orchestrator | changed: [testbed-manager] 2026-01-07 00:11:05.295976 | orchestrator | 2026-01-07 00:11:05.296000 | orchestrator | TASK [Install requests >= 2.32.2] 
********************************************** 2026-01-07 00:11:08.155863 | orchestrator | changed: [testbed-manager] 2026-01-07 00:11:08.155979 | orchestrator | 2026-01-07 00:11:08.155990 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-07 00:11:10.063839 | orchestrator | changed: [testbed-manager] 2026-01-07 00:11:10.063993 | orchestrator | 2026-01-07 00:11:10.064019 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-07 00:11:11.220966 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-07 00:11:11.221084 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-07 00:11:11.221099 | orchestrator | 2026-01-07 00:11:11.221112 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-07 00:11:11.261154 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-07 00:11:11.261219 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-07 00:11:11.261225 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-07 00:11:11.261230 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-07 00:11:14.651853 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-07 00:11:14.651959 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-07 00:11:14.651972 | orchestrator | 2026-01-07 00:11:14.651983 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-07 00:11:15.243219 | orchestrator | changed: [testbed-manager] 2026-01-07 00:11:15.243270 | orchestrator | 2026-01-07 00:11:15.243278 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-07 00:13:37.688061 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-07 00:13:37.688183 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-07 00:13:37.688201 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-07 00:13:37.688214 | orchestrator | 2026-01-07 00:13:37.688226 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-07 00:13:40.157497 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-07 00:13:40.157612 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-07 00:13:40.157637 | orchestrator | 2026-01-07 00:13:40.157658 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-07 00:13:40.157702 | orchestrator | 2026-01-07 00:13:40.157723 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:13:41.579426 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:41.579465 | orchestrator | 2026-01-07 00:13:41.579472 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-07 00:13:41.634819 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:41.634977 | 
orchestrator | 2026-01-07 00:13:41.634992 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-07 00:13:41.712741 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:41.712831 | orchestrator | 2026-01-07 00:13:41.712847 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-07 00:13:42.548153 | orchestrator | changed: [testbed-manager] 2026-01-07 00:13:42.548251 | orchestrator | 2026-01-07 00:13:42.548269 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-07 00:13:43.330581 | orchestrator | changed: [testbed-manager] 2026-01-07 00:13:43.330736 | orchestrator | 2026-01-07 00:13:43.330754 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-07 00:13:44.732626 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-07 00:13:44.732759 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-07 00:13:44.732775 | orchestrator | 2026-01-07 00:13:44.732810 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-07 00:13:46.220485 | orchestrator | changed: [testbed-manager] 2026-01-07 00:13:46.220698 | orchestrator | 2026-01-07 00:13:46.220719 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-07 00:13:48.069754 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-07 00:13:48.070184 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-07 00:13:48.070210 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-07 00:13:48.070224 | orchestrator | 2026-01-07 00:13:48.070239 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-07 00:13:48.140645 | orchestrator | skipping: 
[testbed-manager] 2026-01-07 00:13:48.140824 | orchestrator | 2026-01-07 00:13:48.140850 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-07 00:13:48.229306 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:13:48.229379 | orchestrator | 2026-01-07 00:13:48.229387 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-07 00:13:48.796507 | orchestrator | changed: [testbed-manager] 2026-01-07 00:13:48.796608 | orchestrator | 2026-01-07 00:13:48.796625 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-07 00:13:48.872837 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:13:48.872903 | orchestrator | 2026-01-07 00:13:48.872909 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-07 00:13:49.803551 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-07 00:13:49.803607 | orchestrator | changed: [testbed-manager] 2026-01-07 00:13:49.803618 | orchestrator | 2026-01-07 00:13:49.803625 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-07 00:13:49.842278 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:13:49.842322 | orchestrator | 2026-01-07 00:13:49.842328 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-07 00:13:49.876391 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:13:49.876435 | orchestrator | 2026-01-07 00:13:49.876441 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-07 00:13:49.917597 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:13:49.917697 | orchestrator | 2026-01-07 00:13:49.917709 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-07 00:13:49.989942 | 
orchestrator | skipping: [testbed-manager] 2026-01-07 00:13:49.989990 | orchestrator | 2026-01-07 00:13:49.989998 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-07 00:13:50.746879 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:50.746977 | orchestrator | 2026-01-07 00:13:50.746994 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-07 00:13:50.747007 | orchestrator | 2026-01-07 00:13:50.747018 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:13:52.198708 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:52.198814 | orchestrator | 2026-01-07 00:13:52.198832 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-07 00:13:53.173387 | orchestrator | changed: [testbed-manager] 2026-01-07 00:13:53.173491 | orchestrator | 2026-01-07 00:13:53.173506 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:13:53.173520 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-07 00:13:53.173532 | orchestrator | 2026-01-07 00:13:53.533401 | orchestrator | ok: Runtime: 0:08:11.783824 2026-01-07 00:13:53.553657 | 2026-01-07 00:13:53.553844 | TASK [Point out that logging in to the manager is now possible] 2026-01-07 00:13:53.600505 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-07 00:13:53.610891 | 2026-01-07 00:13:53.611058 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-07 00:13:53.645712 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output here. It takes a few minutes for this task to complete.
2026-01-07 00:13:53.657900 | 2026-01-07 00:13:53.658171 | TASK [Run manager part 1 + 2] 2026-01-07 00:13:54.547852 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-07 00:13:54.606309 | orchestrator | 2026-01-07 00:13:54.606400 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-07 00:13:54.606421 | orchestrator | 2026-01-07 00:13:54.606453 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:13:57.666859 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:57.666918 | orchestrator | 2026-01-07 00:13:57.666944 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-07 00:13:57.692874 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:13:57.692903 | orchestrator | 2026-01-07 00:13:57.692911 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-07 00:13:57.725815 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:57.725862 | orchestrator | 2026-01-07 00:13:57.725873 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-07 00:13:57.766529 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:57.766599 | orchestrator | 2026-01-07 00:13:57.766619 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-07 00:13:57.845213 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:57.845257 | orchestrator | 2026-01-07 00:13:57.845267 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-07 00:13:57.932410 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:57.932516 | orchestrator | 2026-01-07 00:13:57.932535 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-07 00:13:57.980411 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-07 00:13:57.980478 | orchestrator | 2026-01-07 00:13:57.980493 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-07 00:13:58.766521 | orchestrator | ok: [testbed-manager] 2026-01-07 00:13:58.766591 | orchestrator | 2026-01-07 00:13:58.766610 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-07 00:13:58.827642 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:13:58.827710 | orchestrator | 2026-01-07 00:13:58.827718 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-07 00:14:00.282615 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:00.282711 | orchestrator | 2026-01-07 00:14:00.282728 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-07 00:14:00.853938 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:00.853979 | orchestrator | 2026-01-07 00:14:00.853985 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-07 00:14:01.937030 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:01.937086 | orchestrator | 2026-01-07 00:14:01.937101 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-07 00:14:17.521107 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:17.521184 | orchestrator | 2026-01-07 00:14:17.521200 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-07 00:14:18.220120 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:18.220167 | orchestrator | 2026-01-07 00:14:18.220177 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-07 00:14:18.275206 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:18.275244 | orchestrator | 2026-01-07 00:14:18.275251 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-07 00:14:19.302421 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:19.302471 | orchestrator | 2026-01-07 00:14:19.302482 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-07 00:14:20.279945 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:20.280020 | orchestrator | 2026-01-07 00:14:20.280034 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-07 00:14:20.891448 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:20.891488 | orchestrator | 2026-01-07 00:14:20.891495 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-07 00:14:20.936397 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-07 00:14:20.936495 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-07 00:14:20.936506 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-07 00:14:20.936514 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-07 00:14:22.949398 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:22.949507 | orchestrator | 2026-01-07 00:14:22.949525 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-07 00:14:32.070109 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-07 00:14:32.070204 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-07 00:14:32.070219 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-07 00:14:32.070230 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-07 00:14:32.070247 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-07 00:14:32.070257 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-07 00:14:32.070267 | orchestrator | 2026-01-07 00:14:32.070278 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-07 00:14:33.146851 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:33.146902 | orchestrator | 2026-01-07 00:14:33.146909 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-07 00:14:33.184794 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:33.184841 | orchestrator | 2026-01-07 00:14:33.184847 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-07 00:14:36.424963 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:36.425011 | orchestrator | 2026-01-07 00:14:36.425017 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-07 00:14:36.464744 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:36.464788 | orchestrator | 2026-01-07 00:14:36.464793 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-07 00:16:16.341203 | orchestrator | changed: [testbed-manager] 2026-01-07 
00:16:16.341335 | orchestrator | 2026-01-07 00:16:16.341353 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-07 00:16:17.541434 | orchestrator | ok: [testbed-manager] 2026-01-07 00:16:17.541478 | orchestrator | 2026-01-07 00:16:17.541484 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:16:17.541490 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-07 00:16:17.541495 | orchestrator | 2026-01-07 00:16:17.802003 | orchestrator | ok: Runtime: 0:02:23.672295 2026-01-07 00:16:17.822527 | 2026-01-07 00:16:17.822755 | TASK [Reboot manager] 2026-01-07 00:16:19.360998 | orchestrator | ok: Runtime: 0:00:00.989311 2026-01-07 00:16:19.369545 | 2026-01-07 00:16:19.369705 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-07 00:16:35.862702 | orchestrator | ok 2026-01-07 00:16:35.873395 | 2026-01-07 00:16:35.873541 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-07 00:17:35.920281 | orchestrator | ok 2026-01-07 00:17:35.928000 | 2026-01-07 00:17:35.928145 | TASK [Deploy manager + bootstrap nodes] 2026-01-07 00:17:38.588587 | orchestrator | 2026-01-07 00:17:38.588854 | orchestrator | # DEPLOY MANAGER 2026-01-07 00:17:38.588881 | orchestrator | 2026-01-07 00:17:38.588896 | orchestrator | + set -e 2026-01-07 00:17:38.588910 | orchestrator | + echo 2026-01-07 00:17:38.588925 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-07 00:17:38.588942 | orchestrator | + echo 2026-01-07 00:17:38.588997 | orchestrator | + cat /opt/manager-vars.sh 2026-01-07 00:17:38.591014 | orchestrator | export NUMBER_OF_NODES=6 2026-01-07 00:17:38.591046 | orchestrator | 2026-01-07 00:17:38.591058 | orchestrator | export CEPH_VERSION=reef 2026-01-07 00:17:38.591071 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-07 00:17:38.591083 | orchestrator 
| export MANAGER_VERSION=latest 2026-01-07 00:17:38.591107 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-07 00:17:38.591118 | orchestrator | 2026-01-07 00:17:38.591137 | orchestrator | export ARA=false 2026-01-07 00:17:38.591149 | orchestrator | export DEPLOY_MODE=manager 2026-01-07 00:17:38.591166 | orchestrator | export TEMPEST=true 2026-01-07 00:17:38.591178 | orchestrator | export IS_ZUUL=true 2026-01-07 00:17:38.591189 | orchestrator | 2026-01-07 00:17:38.591208 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2026-01-07 00:17:38.591220 | orchestrator | export EXTERNAL_API=false 2026-01-07 00:17:38.591231 | orchestrator | 2026-01-07 00:17:38.591242 | orchestrator | export IMAGE_USER=ubuntu 2026-01-07 00:17:38.591258 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-07 00:17:38.591269 | orchestrator | 2026-01-07 00:17:38.591280 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-07 00:17:38.591290 | orchestrator | 2026-01-07 00:17:38.591301 | orchestrator | + echo 2026-01-07 00:17:38.591314 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-07 00:17:38.591568 | orchestrator | ++ export INTERACTIVE=false 2026-01-07 00:17:38.591585 | orchestrator | ++ INTERACTIVE=false 2026-01-07 00:17:38.591597 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-07 00:17:38.591608 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-07 00:17:38.591650 | orchestrator | + source /opt/manager-vars.sh 2026-01-07 00:17:38.591661 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-07 00:17:38.591672 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-07 00:17:38.591822 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-07 00:17:38.591838 | orchestrator | ++ CEPH_VERSION=reef 2026-01-07 00:17:38.591849 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-07 00:17:38.591861 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-07 00:17:38.591871 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-07 00:17:38.591882 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-01-07 00:17:38.591893 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-07 00:17:38.591913 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-07 00:17:38.591925 | orchestrator | ++ export ARA=false 2026-01-07 00:17:38.591935 | orchestrator | ++ ARA=false 2026-01-07 00:17:38.591946 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-07 00:17:38.591957 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-07 00:17:38.591968 | orchestrator | ++ export TEMPEST=true 2026-01-07 00:17:38.591978 | orchestrator | ++ TEMPEST=true 2026-01-07 00:17:38.591989 | orchestrator | ++ export IS_ZUUL=true 2026-01-07 00:17:38.592000 | orchestrator | ++ IS_ZUUL=true 2026-01-07 00:17:38.592011 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2026-01-07 00:17:38.592022 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2026-01-07 00:17:38.592033 | orchestrator | ++ export EXTERNAL_API=false 2026-01-07 00:17:38.592044 | orchestrator | ++ EXTERNAL_API=false 2026-01-07 00:17:38.592055 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-07 00:17:38.592066 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-07 00:17:38.592077 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-07 00:17:38.592092 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-07 00:17:38.592104 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-07 00:17:38.592115 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-07 00:17:38.592126 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-07 00:17:38.652986 | orchestrator | + docker version 2026-01-07 00:17:38.909468 | orchestrator | Client: Docker Engine - Community 2026-01-07 00:17:38.909602 | orchestrator | Version: 27.5.1 2026-01-07 00:17:38.909671 | orchestrator | API version: 1.47 2026-01-07 00:17:38.909686 | orchestrator | Go version: go1.22.11 2026-01-07 00:17:38.909698 | orchestrator | Git commit: 9f9e405 2026-01-07 00:17:38.909709 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-07 00:17:38.909722 | orchestrator | OS/Arch: linux/amd64 2026-01-07 00:17:38.909733 | orchestrator | Context: default 2026-01-07 00:17:38.909744 | orchestrator | 2026-01-07 00:17:38.909756 | orchestrator | Server: Docker Engine - Community 2026-01-07 00:17:38.909768 | orchestrator | Engine: 2026-01-07 00:17:38.909779 | orchestrator | Version: 27.5.1 2026-01-07 00:17:38.909791 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-07 00:17:38.909846 | orchestrator | Go version: go1.22.11 2026-01-07 00:17:38.909858 | orchestrator | Git commit: 4c9b3b0 2026-01-07 00:17:38.909869 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-07 00:17:38.909880 | orchestrator | OS/Arch: linux/amd64 2026-01-07 00:17:38.909891 | orchestrator | Experimental: false 2026-01-07 00:17:38.909902 | orchestrator | containerd: 2026-01-07 00:17:38.909913 | orchestrator | Version: v2.2.1 2026-01-07 00:17:38.909942 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-07 00:17:38.909955 | orchestrator | runc: 2026-01-07 00:17:38.909966 | orchestrator | Version: 1.3.4 2026-01-07 00:17:38.909977 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-07 00:17:38.909988 | orchestrator | docker-init: 2026-01-07 00:17:38.910156 | orchestrator | Version: 0.19.0 2026-01-07 00:17:38.910175 | orchestrator | GitCommit: de40ad0 2026-01-07 00:17:38.913726 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-07 00:17:38.923115 | orchestrator | + set -e 2026-01-07 00:17:38.923143 | orchestrator | + source /opt/manager-vars.sh 2026-01-07 00:17:38.923154 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-07 00:17:38.923166 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-07 00:17:38.923177 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-07 00:17:38.923188 | orchestrator | ++ CEPH_VERSION=reef 2026-01-07 00:17:38.923198 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-07 
00:17:38.923209 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-07 00:17:38.923220 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-07 00:17:38.923231 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-07 00:17:38.923242 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-07 00:17:38.923253 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-07 00:17:38.923263 | orchestrator | ++ export ARA=false
2026-01-07 00:17:38.923274 | orchestrator | ++ ARA=false
2026-01-07 00:17:38.923285 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-07 00:17:38.923296 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-07 00:17:38.923307 | orchestrator | ++ export TEMPEST=true
2026-01-07 00:17:38.923318 | orchestrator | ++ TEMPEST=true
2026-01-07 00:17:38.923329 | orchestrator | ++ export IS_ZUUL=true
2026-01-07 00:17:38.923340 | orchestrator | ++ IS_ZUUL=true
2026-01-07 00:17:38.923350 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57
2026-01-07 00:17:38.923361 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57
2026-01-07 00:17:38.923372 | orchestrator | ++ export EXTERNAL_API=false
2026-01-07 00:17:38.923383 | orchestrator | ++ EXTERNAL_API=false
2026-01-07 00:17:38.923393 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-07 00:17:38.923404 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-07 00:17:38.923415 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-07 00:17:38.923425 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-07 00:17:38.923436 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-07 00:17:38.923447 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-07 00:17:38.923458 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-07 00:17:38.923469 | orchestrator | ++ export INTERACTIVE=false
2026-01-07 00:17:38.923480 | orchestrator | ++ INTERACTIVE=false
2026-01-07 00:17:38.923490 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-07 00:17:38.923505 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-07 00:17:38.923522 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-07 00:17:38.923533 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-07 00:17:38.923544 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-01-07 00:17:38.929157 | orchestrator | + set -e
2026-01-07 00:17:38.929188 | orchestrator | + VERSION=reef
2026-01-07 00:17:38.929683 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-01-07 00:17:38.935904 | orchestrator | + [[ -n ceph_version: reef ]]
2026-01-07 00:17:38.936000 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-01-07 00:17:38.940463 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2026-01-07 00:17:38.946268 | orchestrator | + set -e
2026-01-07 00:17:38.946333 | orchestrator | + VERSION=2024.2
2026-01-07 00:17:38.946659 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-01-07 00:17:38.951014 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-01-07 00:17:38.951056 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2026-01-07 00:17:38.956214 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-01-07 00:17:38.957543 | orchestrator | ++ semver latest 7.0.0
2026-01-07 00:17:39.019489 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-07 00:17:39.019608 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-07 00:17:39.019675 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-01-07 00:17:39.020706 | orchestrator | ++ semver latest 10.0.0-0
2026-01-07 00:17:39.084477 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-07 00:17:39.085087 | orchestrator | ++ semver 2024.2 2025.1
2026-01-07 00:17:39.139928 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-07 00:17:39.140026 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-01-07 00:17:39.237646 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-07 00:17:39.239131 | orchestrator | + source /opt/venv/bin/activate
2026-01-07 00:17:39.240411 | orchestrator | ++ deactivate nondestructive
2026-01-07 00:17:39.240430 | orchestrator | ++ '[' -n '' ']'
2026-01-07 00:17:39.240445 | orchestrator | ++ '[' -n '' ']'
2026-01-07 00:17:39.240465 | orchestrator | ++ hash -r
2026-01-07 00:17:39.240485 | orchestrator | ++ '[' -n '' ']'
2026-01-07 00:17:39.240512 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-07 00:17:39.240532 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-07 00:17:39.240554 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-07 00:17:39.240664 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-07 00:17:39.240682 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-07 00:17:39.240694 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-07 00:17:39.240705 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-07 00:17:39.240717 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-07 00:17:39.240802 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-07 00:17:39.240818 | orchestrator | ++ export PATH
2026-01-07 00:17:39.240833 | orchestrator | ++ '[' -n '' ']'
2026-01-07 00:17:39.241228 | orchestrator | ++ '[' -z '' ']'
2026-01-07 00:17:39.241245 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-07 00:17:39.241256 | orchestrator | ++ PS1='(venv) '
2026-01-07 00:17:39.241267 | orchestrator | ++ export PS1
2026-01-07 00:17:39.241278 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-07 00:17:39.241290 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-07 00:17:39.241302 | orchestrator | ++ hash -r
2026-01-07 00:17:39.241345 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-01-07 00:17:40.646826 | orchestrator |
2026-01-07 00:17:40.646978 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-01-07 00:17:40.647002 | orchestrator |
2026-01-07 00:17:40.647023 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-07 00:17:41.235147 | orchestrator | ok: [testbed-manager]
2026-01-07 00:17:41.235277 | orchestrator |
2026-01-07 00:17:41.235295 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-07 00:17:42.222201 | orchestrator | changed: [testbed-manager]
2026-01-07 00:17:42.222354 | orchestrator |
2026-01-07 00:17:42.222372 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-01-07 00:17:42.222385 | orchestrator |
2026-01-07 00:17:42.222396 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:17:44.679337 | orchestrator | ok: [testbed-manager]
2026-01-07 00:17:44.679462 | orchestrator |
2026-01-07 00:17:44.679479 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-01-07 00:17:44.731815 | orchestrator | ok: [testbed-manager]
2026-01-07 00:17:44.731931 | orchestrator |
2026-01-07 00:17:44.731950 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-01-07 00:17:45.182125 | orchestrator | changed: [testbed-manager]
2026-01-07 00:17:45.182257 | orchestrator |
2026-01-07 00:17:45.182283 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-01-07 00:17:45.231005 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:17:45.231133 | orchestrator |
2026-01-07 00:17:45.231150 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-07 00:17:45.594008 | orchestrator | changed: [testbed-manager]
2026-01-07 00:17:45.594254 | orchestrator |
2026-01-07 00:17:45.594270 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2026-01-07 00:17:45.649436 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:17:45.649591 | orchestrator |
2026-01-07 00:17:45.649607 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-01-07 00:17:46.002410 | orchestrator | ok: [testbed-manager]
2026-01-07 00:17:46.002528 | orchestrator |
2026-01-07 00:17:46.002545 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-01-07 00:17:46.127747 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:17:46.127848 | orchestrator |
2026-01-07 00:17:46.127864 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-01-07 00:17:46.127877 | orchestrator |
2026-01-07 00:17:46.127888 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:17:47.903337 | orchestrator | ok: [testbed-manager]
2026-01-07 00:17:47.903451 | orchestrator |
2026-01-07 00:17:47.903468 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-01-07 00:17:48.005683 | orchestrator | included: osism.services.traefik for testbed-manager
2026-01-07 00:17:48.005792 | orchestrator |
2026-01-07 00:17:48.005809 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-01-07 00:17:48.065363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-01-07 00:17:48.065466 | orchestrator |
2026-01-07 00:17:48.065482 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-01-07 00:17:49.160381 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-01-07 00:17:49.160468 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-01-07 00:17:49.160478 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-01-07 00:17:49.160486 | orchestrator |
2026-01-07 00:17:49.160494 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-01-07 00:17:51.030272 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-01-07 00:17:51.030411 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-01-07 00:17:51.030432 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-01-07 00:17:51.030445 | orchestrator |
2026-01-07 00:17:51.030458 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-01-07 00:17:51.694896 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:17:51.695028 | orchestrator | changed: [testbed-manager]
2026-01-07 00:17:51.695043 | orchestrator |
2026-01-07 00:17:51.695054 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-01-07 00:17:52.371850 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:17:52.371961 | orchestrator | changed: [testbed-manager]
2026-01-07 00:17:52.371978 | orchestrator |
2026-01-07 00:17:52.371991 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-01-07 00:17:52.429178 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:17:52.429274 | orchestrator |
2026-01-07 00:17:52.429290 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-01-07 00:17:52.789882 | orchestrator | ok: [testbed-manager]
2026-01-07 00:17:52.790008 | orchestrator |
2026-01-07 00:17:52.790072 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-01-07 00:17:52.860545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-01-07 00:17:52.860703 | orchestrator |
2026-01-07 00:17:52.860731 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-01-07 00:17:54.019960 | orchestrator | changed: [testbed-manager]
2026-01-07 00:17:54.020067 | orchestrator |
2026-01-07 00:17:54.020089 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-01-07 00:17:54.895423 | orchestrator | changed: [testbed-manager]
2026-01-07 00:17:54.895553 | orchestrator |
2026-01-07 00:17:54.895570 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-01-07 00:18:09.518557 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:09.518733 | orchestrator |
2026-01-07 00:18:09.518753 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-01-07 00:18:09.574527 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:18:09.574665 | orchestrator |
2026-01-07 00:18:09.574681 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-01-07 00:18:09.574694 | orchestrator |
2026-01-07 00:18:09.574738 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:18:11.393248 | orchestrator | ok: [testbed-manager]
2026-01-07 00:18:11.393355 | orchestrator |
2026-01-07 00:18:11.393372 | orchestrator | TASK [Apply manager role] ******************************************************
2026-01-07 00:18:11.539430 | orchestrator | included: osism.services.manager for testbed-manager
2026-01-07 00:18:11.539533 | orchestrator |
2026-01-07 00:18:11.539549 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-01-07 00:18:11.595961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-01-07 00:18:11.596064 | orchestrator |
2026-01-07 00:18:11.596080 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-01-07 00:18:14.381356 | orchestrator | ok: [testbed-manager]
2026-01-07 00:18:14.381471 | orchestrator |
2026-01-07 00:18:14.381488 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-01-07 00:18:14.430777 | orchestrator | ok: [testbed-manager]
2026-01-07 00:18:14.430909 | orchestrator |
2026-01-07 00:18:14.430933 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-01-07 00:18:14.578648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-01-07 00:18:14.578750 | orchestrator |
2026-01-07 00:18:14.578765 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-01-07 00:18:17.521615 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-01-07 00:18:17.521743 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-01-07 00:18:17.521753 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-01-07 00:18:17.521762 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-01-07 00:18:17.521770 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-01-07 00:18:17.521778 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-01-07 00:18:17.521785 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-01-07 00:18:17.521792 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-01-07 00:18:17.521799 | orchestrator |
2026-01-07 00:18:17.521807 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-01-07 00:18:18.180822 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:18.180943 | orchestrator |
2026-01-07 00:18:18.180959 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-01-07 00:18:18.839555 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:18.839754 | orchestrator |
2026-01-07 00:18:18.839786 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-01-07 00:18:18.928323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-01-07 00:18:18.928425 | orchestrator |
2026-01-07 00:18:18.928441 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-01-07 00:18:20.186599 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-01-07 00:18:20.186746 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-01-07 00:18:20.186761 | orchestrator |
2026-01-07 00:18:20.186772 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-01-07 00:18:20.829678 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:20.829800 | orchestrator |
2026-01-07 00:18:20.829827 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-01-07 00:18:20.894474 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:18:20.894556 | orchestrator |
2026-01-07 00:18:20.894570 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-01-07 00:18:20.970196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-01-07 00:18:20.970285 | orchestrator |
2026-01-07 00:18:20.970298 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-01-07 00:18:21.633976 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:21.634162 | orchestrator |
2026-01-07 00:18:21.634211 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-01-07 00:18:21.703327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-01-07 00:18:21.703422 | orchestrator |
2026-01-07 00:18:21.703446 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-01-07 00:18:23.090296 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:18:23.090381 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:18:23.090390 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:23.090397 | orchestrator |
2026-01-07 00:18:23.090403 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-01-07 00:18:23.752853 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:23.752951 | orchestrator |
2026-01-07 00:18:23.752964 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-01-07 00:18:23.806521 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:18:23.806687 | orchestrator |
2026-01-07 00:18:23.806711 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-01-07 00:18:23.902127 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-01-07 00:18:23.902266 | orchestrator |
2026-01-07 00:18:23.902321 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-01-07 00:18:24.458349 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:24.458491 | orchestrator |
2026-01-07 00:18:24.458507 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-01-07 00:18:24.890345 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:24.890430 | orchestrator |
2026-01-07 00:18:24.890441 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-01-07 00:18:26.163847 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-01-07 00:18:26.163962 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-01-07 00:18:26.163977 | orchestrator |
2026-01-07 00:18:26.163990 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-01-07 00:18:26.838125 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:26.838232 | orchestrator |
2026-01-07 00:18:26.838249 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-01-07 00:18:27.224932 | orchestrator | ok: [testbed-manager]
2026-01-07 00:18:27.225015 | orchestrator |
2026-01-07 00:18:27.225025 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-01-07 00:18:27.617375 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:27.617448 | orchestrator |
2026-01-07 00:18:27.617454 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-01-07 00:18:27.676019 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:18:27.676116 | orchestrator |
2026-01-07 00:18:27.676139 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-01-07 00:18:27.755592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-01-07 00:18:27.755746 | orchestrator |
2026-01-07 00:18:27.755774 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-01-07 00:18:27.800379 | orchestrator | ok: [testbed-manager]
2026-01-07 00:18:27.800474 | orchestrator |
2026-01-07 00:18:27.800488 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-01-07 00:18:29.899143 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-01-07 00:18:29.899249 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-01-07 00:18:29.899264 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-01-07 00:18:29.899285 | orchestrator |
2026-01-07 00:18:29.899395 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-01-07 00:18:30.618297 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:30.618371 | orchestrator |
2026-01-07 00:18:30.618386 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-01-07 00:18:31.345815 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:31.345924 | orchestrator |
2026-01-07 00:18:31.345942 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-01-07 00:18:32.077289 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:32.077401 | orchestrator |
2026-01-07 00:18:32.077421 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-01-07 00:18:32.148353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-01-07 00:18:32.148459 | orchestrator |
2026-01-07 00:18:32.148474 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-01-07 00:18:32.200795 | orchestrator | ok: [testbed-manager]
2026-01-07 00:18:32.200898 | orchestrator |
2026-01-07 00:18:32.200913 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-01-07 00:18:32.917413 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-01-07 00:18:32.917516 | orchestrator |
2026-01-07 00:18:32.917531 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-01-07 00:18:33.009066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-01-07 00:18:33.009163 | orchestrator |
2026-01-07 00:18:33.009179 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-01-07 00:18:33.742432 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:33.742568 | orchestrator |
2026-01-07 00:18:33.742596 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-01-07 00:18:34.390392 | orchestrator | ok: [testbed-manager]
2026-01-07 00:18:34.390448 | orchestrator |
2026-01-07 00:18:34.390456 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-01-07 00:18:34.454984 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:18:34.455080 | orchestrator |
2026-01-07 00:18:34.455095 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-01-07 00:18:34.512033 | orchestrator | ok: [testbed-manager]
2026-01-07 00:18:34.512098 | orchestrator |
2026-01-07 00:18:34.512111 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-01-07 00:18:35.408294 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:35.408435 | orchestrator |
2026-01-07 00:18:35.408453 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-01-07 00:19:49.189153 | orchestrator | changed: [testbed-manager]
2026-01-07 00:19:49.189287 | orchestrator |
2026-01-07 00:19:49.189308 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-01-07 00:19:50.238739 | orchestrator | ok: [testbed-manager]
2026-01-07 00:19:50.238832 | orchestrator |
2026-01-07 00:19:50.238839 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-01-07 00:19:50.303952 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:19:50.304064 | orchestrator |
2026-01-07 00:19:50.304079 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-01-07 00:19:52.832351 | orchestrator | changed: [testbed-manager]
2026-01-07 00:19:52.832456 | orchestrator |
2026-01-07 00:19:52.832496 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-07 00:19:52.887410 | orchestrator | ok: [testbed-manager]
2026-01-07 00:19:52.887521 | orchestrator |
2026-01-07 00:19:52.887536 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-07 00:19:52.887550 | orchestrator |
2026-01-07 00:19:52.887561 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-07 00:19:52.942484 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:19:52.942597 | orchestrator |
2026-01-07 00:19:52.942619 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-07 00:20:52.996121 | orchestrator | Pausing for 60 seconds
2026-01-07 00:20:52.996245 | orchestrator | changed: [testbed-manager]
2026-01-07 00:20:52.996263 | orchestrator |
2026-01-07 00:20:52.996277 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-07 00:20:56.107585 | orchestrator | changed: [testbed-manager]
2026-01-07 00:20:56.107755 | orchestrator |
2026-01-07 00:20:56.107774 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-07 00:21:58.185538 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-07 00:21:58.185768 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-01-07 00:21:58.185800 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-01-07 00:21:58.185813 | orchestrator | changed: [testbed-manager]
2026-01-07 00:21:58.185826 | orchestrator |
2026-01-07 00:21:58.185838 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-01-07 00:22:09.305091 | orchestrator | changed: [testbed-manager]
2026-01-07 00:22:09.305211 | orchestrator |
2026-01-07 00:22:09.305230 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-01-07 00:22:09.400877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-01-07 00:22:09.400977 | orchestrator |
2026-01-07 00:22:09.400993 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-07 00:22:09.401007 | orchestrator |
2026-01-07 00:22:09.401018 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-01-07 00:22:09.455507 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:22:09.455596 | orchestrator |
2026-01-07 00:22:09.455611 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-01-07 00:22:09.533184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-01-07 00:22:09.533298 | orchestrator |
2026-01-07 00:22:09.533323 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-01-07 00:22:10.396789 | orchestrator | changed: [testbed-manager]
2026-01-07 00:22:10.396901 | orchestrator |
2026-01-07 00:22:10.396911 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-01-07 00:22:13.774110 | orchestrator | ok: [testbed-manager]
2026-01-07 00:22:13.774221 | orchestrator |
2026-01-07 00:22:13.774236 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-07 00:22:13.841195 | orchestrator | ok: [testbed-manager] => {
2026-01-07 00:22:13.841282 | orchestrator | "version_check_result.stdout_lines": [
2026-01-07 00:22:13.841297 | orchestrator | "=== OSISM Container Version Check ===",
2026-01-07 00:22:13.841307 | orchestrator | "Checking running containers against expected versions...",
2026-01-07 00:22:13.841317 | orchestrator | "",
2026-01-07 00:22:13.841327 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-07 00:22:13.841336 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-07 00:22:13.841346 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841355 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-07 00:22:13.841363 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841372 | orchestrator | "",
2026-01-07 00:22:13.841381 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-07 00:22:13.841390 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-01-07 00:22:13.841399 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841408 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2026-01-07 00:22:13.841416 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841425 | orchestrator | "",
2026-01-07 00:22:13.841434 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-07 00:22:13.841443 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-07 00:22:13.841451 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841460 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-07 00:22:13.841469 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841478 | orchestrator | "",
2026-01-07 00:22:13.841487 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-07 00:22:13.841496 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-07 00:22:13.841505 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841513 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-07 00:22:13.841542 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841551 | orchestrator | "",
2026-01-07 00:22:13.841559 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-07 00:22:13.841568 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-01-07 00:22:13.841577 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841585 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-01-07 00:22:13.841594 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841602 | orchestrator | "",
2026-01-07 00:22:13.841611 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-01-07 00:22:13.841620 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.841628 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841637 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.841646 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841695 | orchestrator | "",
2026-01-07 00:22:13.841705 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-07 00:22:13.841714 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-07 00:22:13.841723 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841731 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-07 00:22:13.841741 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841751 | orchestrator | "",
2026-01-07 00:22:13.841760 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-01-07 00:22:13.841770 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-07 00:22:13.841781 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841796 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-07 00:22:13.841812 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841822 | orchestrator | "",
2026-01-07 00:22:13.841833 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-01-07 00:22:13.841843 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-01-07 00:22:13.841853 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841864 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-01-07 00:22:13.841874 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841884 | orchestrator | "",
2026-01-07 00:22:13.841893 | orchestrator | "Checking service: redis (Redis Cache)",
2026-01-07 00:22:13.841903 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-07 00:22:13.841914 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841924 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-07 00:22:13.841934 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.841944 | orchestrator | "",
2026-01-07 00:22:13.841954 | orchestrator | "Checking service: api (OSISM API Service)",
2026-01-07 00:22:13.841964 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.841974 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.841984 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.841994 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.842004 | orchestrator | "",
2026-01-07 00:22:13.842076 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-01-07 00:22:13.842088 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.842098 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.842109 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.842118 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.842127 | orchestrator | "",
2026-01-07 00:22:13.842135 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-01-07 00:22:13.842144 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.842153 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.842161 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.842170 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.842179 | orchestrator | "",
2026-01-07 00:22:13.842187 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-01-07 00:22:13.842203 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.842212 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.842221 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.842230 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.842238 | orchestrator | "",
2026-01-07 00:22:13.842247 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-01-07 00:22:13.842271 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.842280 | orchestrator | " Enabled: true",
2026-01-07 00:22:13.842289 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:22:13.842298 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:13.842307 | orchestrator | "",
2026-01-07 00:22:13.842316 | orchestrator | "=== Summary ===",
2026-01-07 00:22:13.842325 | orchestrator | "Errors (version mismatches): 0",
2026-01-07 00:22:13.842334 | orchestrator | "Warnings (expected containers not running): 0",
2026-01-07 00:22:13.842343 | orchestrator | "",
2026-01-07 00:22:13.842352 | orchestrator | "✅ All running containers match expected versions!"
2026-01-07 00:22:13.842361 | orchestrator | ]
2026-01-07 00:22:13.842371 | orchestrator | }
2026-01-07 00:22:13.842380 | orchestrator |
2026-01-07 00:22:13.842389 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-07 00:22:13.893420 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:22:13.893529 | orchestrator |
2026-01-07 00:22:13.893545 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:22:13.893559 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2026-01-07 00:22:13.893570 | orchestrator |
2026-01-07 00:22:14.009080 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-07 00:22:14.009204 | orchestrator | + deactivate
2026-01-07 00:22:14.009233 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-07 00:22:14.009254 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-07 00:22:14.009274 | orchestrator | + export PATH
2026-01-07 00:22:14.009292 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-07 00:22:14.009310 | orchestrator | + '['
-n '' ']' 2026-01-07 00:22:14.009328 | orchestrator | + hash -r 2026-01-07 00:22:14.009347 | orchestrator | + '[' -n '' ']' 2026-01-07 00:22:14.009366 | orchestrator | + unset VIRTUAL_ENV 2026-01-07 00:22:14.009384 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-07 00:22:14.009403 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-01-07 00:22:14.009421 | orchestrator | + unset -f deactivate 2026-01-07 00:22:14.009440 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-07 00:22:14.018460 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-07 00:22:14.018545 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-07 00:22:14.018562 | orchestrator | + local max_attempts=60 2026-01-07 00:22:14.018576 | orchestrator | + local name=ceph-ansible 2026-01-07 00:22:14.018587 | orchestrator | + local attempt_num=1 2026-01-07 00:22:14.019799 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:22:14.061881 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:22:14.061970 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-07 00:22:14.061995 | orchestrator | + local max_attempts=60 2026-01-07 00:22:14.062085 | orchestrator | + local name=kolla-ansible 2026-01-07 00:22:14.062107 | orchestrator | + local attempt_num=1 2026-01-07 00:22:14.062598 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-07 00:22:14.103596 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:22:14.103737 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-07 00:22:14.103756 | orchestrator | + local max_attempts=60 2026-01-07 00:22:14.103768 | orchestrator | + local name=osism-ansible 2026-01-07 00:22:14.103779 | orchestrator | + local attempt_num=1 2026-01-07 00:22:14.104422 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 
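The `wait_for_container_healthy` calls traced above (local `max_attempts`, `name`, `attempt_num` variables, then `docker inspect -f '{{.State.Health.Status}}'`) suggest a polling helper roughly like the following. This is a hypothetical reconstruction from the trace, not the actual testbed script: the retry interval and the error message are assumptions.

```shell
# Hypothetical reconstruction of wait_for_container_healthy from the
# trace above; the 5-second sleep and the error text are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the job above, `ceph-ansible`, `kolla-ansible`, and `osism-ansible` are already healthy, so each call returns after a single `docker inspect`.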
2026-01-07 00:22:14.140746 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:22:14.140826 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-07 00:22:14.140839 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-07 00:22:14.911353 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-01-07 00:22:15.102453 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-07 00:22:15.102553 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-07 00:22:15.102565 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-07 00:22:15.102573 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-07 00:22:15.102585 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-07 00:22:15.102595 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-07 00:22:15.102603 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-07 00:22:15.102612 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-07 00:22:15.102642 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-07 00:22:15.102698 | orchestrator | manager-mariadb-1 
registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-07 00:22:15.102708 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-07 00:22:15.102717 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-07 00:22:15.102725 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-07 00:22:15.102734 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-07 00:22:15.102742 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-07 00:22:15.102751 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-07 00:22:15.110271 | orchestrator | ++ semver latest 7.0.0 2026-01-07 00:22:15.163197 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-07 00:22:15.163268 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-07 00:22:15.163278 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-07 00:22:15.166399 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-07 00:22:27.457132 | orchestrator | 2026-01-07 00:22:27 | INFO  | Task 1878f5de-cc1c-421c-bc26-07597a58079e (resolvconf) was prepared for execution. 
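The version gate traced above (`semver latest 7.0.0` returning `-1`, then a separate `[[ latest == latest ]]` fallback before the `sed` on `ansible.cfg`) implies logic roughly like this hypothetical sketch; the function name and structure are assumptions, and a `semver` comparison binary is presumed available as in the trace.

```shell
# Hypothetical sketch of the version gate seen in the trace above:
# a plain semver comparison of "latest" against "7.0.0" yields -1,
# so a "latest" tag needs an explicit check to pass the gate.
version_at_least() {
    local version="$1"
    local minimum="$2"
    # semver prints a negative, zero, or positive comparison result.
    if [ "$(semver "$version" "$minimum")" -ge 0 ]; then
        return 0
    fi
    # Fallback: treat the floating "latest" tag as satisfying the minimum.
    [ "$version" = "latest" ]
}
```

When the gate passes, the job swaps the Ansible callback setting in `/opt/configuration/environments/ansible.cfg` via `sed`, as the trace shows.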
2026-01-07 00:22:27.457280 | orchestrator | 2026-01-07 00:22:27 | INFO  | It takes a moment until task 1878f5de-cc1c-421c-bc26-07597a58079e (resolvconf) has been started and output is visible here. 2026-01-07 00:22:43.139167 | orchestrator | 2026-01-07 00:22:43.139293 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-07 00:22:43.139310 | orchestrator | 2026-01-07 00:22:43.139322 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:22:43.139334 | orchestrator | Wednesday 07 January 2026 00:22:31 +0000 (0:00:00.147) 0:00:00.147 ***** 2026-01-07 00:22:43.139345 | orchestrator | ok: [testbed-manager] 2026-01-07 00:22:43.139357 | orchestrator | 2026-01-07 00:22:43.139369 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-07 00:22:43.139381 | orchestrator | Wednesday 07 January 2026 00:22:36 +0000 (0:00:04.956) 0:00:05.104 ***** 2026-01-07 00:22:43.139392 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:22:43.139403 | orchestrator | 2026-01-07 00:22:43.139414 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-07 00:22:43.139426 | orchestrator | Wednesday 07 January 2026 00:22:36 +0000 (0:00:00.069) 0:00:05.173 ***** 2026-01-07 00:22:43.139437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-07 00:22:43.139449 | orchestrator | 2026-01-07 00:22:43.139460 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-07 00:22:43.139471 | orchestrator | Wednesday 07 January 2026 00:22:36 +0000 (0:00:00.089) 0:00:05.263 ***** 2026-01-07 00:22:43.139483 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:22:43.139494 | orchestrator | 2026-01-07 00:22:43.139505 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-07 00:22:43.139527 | orchestrator | Wednesday 07 January 2026 00:22:37 +0000 (0:00:00.087) 0:00:05.350 ***** 2026-01-07 00:22:43.139540 | orchestrator | ok: [testbed-manager] 2026-01-07 00:22:43.139551 | orchestrator | 2026-01-07 00:22:43.139562 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-07 00:22:43.139573 | orchestrator | Wednesday 07 January 2026 00:22:38 +0000 (0:00:01.180) 0:00:06.531 ***** 2026-01-07 00:22:43.139584 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:22:43.139595 | orchestrator | 2026-01-07 00:22:43.139606 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-07 00:22:43.139617 | orchestrator | Wednesday 07 January 2026 00:22:38 +0000 (0:00:00.070) 0:00:06.602 ***** 2026-01-07 00:22:43.139628 | orchestrator | ok: [testbed-manager] 2026-01-07 00:22:43.139639 | orchestrator | 2026-01-07 00:22:43.139679 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-07 00:22:43.139694 | orchestrator | Wednesday 07 January 2026 00:22:38 +0000 (0:00:00.543) 0:00:07.146 ***** 2026-01-07 00:22:43.139708 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:22:43.139720 | orchestrator | 2026-01-07 00:22:43.139733 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-07 00:22:43.139746 | orchestrator | Wednesday 07 January 2026 00:22:38 +0000 (0:00:00.085) 0:00:07.231 ***** 2026-01-07 00:22:43.139758 | orchestrator | changed: [testbed-manager] 2026-01-07 00:22:43.139771 | orchestrator | 2026-01-07 
00:22:43.139783 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-07 00:22:43.139796 | orchestrator | Wednesday 07 January 2026 00:22:39 +0000 (0:00:00.565) 0:00:07.797 ***** 2026-01-07 00:22:43.139808 | orchestrator | changed: [testbed-manager] 2026-01-07 00:22:43.139820 | orchestrator | 2026-01-07 00:22:43.139830 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-07 00:22:43.139841 | orchestrator | Wednesday 07 January 2026 00:22:40 +0000 (0:00:01.125) 0:00:08.922 ***** 2026-01-07 00:22:43.139875 | orchestrator | ok: [testbed-manager] 2026-01-07 00:22:43.139886 | orchestrator | 2026-01-07 00:22:43.139897 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-07 00:22:43.139908 | orchestrator | Wednesday 07 January 2026 00:22:41 +0000 (0:00:00.990) 0:00:09.913 ***** 2026-01-07 00:22:43.139919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-07 00:22:43.139930 | orchestrator | 2026-01-07 00:22:43.139941 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-07 00:22:43.139952 | orchestrator | Wednesday 07 January 2026 00:22:41 +0000 (0:00:00.095) 0:00:10.008 ***** 2026-01-07 00:22:43.139963 | orchestrator | changed: [testbed-manager] 2026-01-07 00:22:43.139974 | orchestrator | 2026-01-07 00:22:43.139985 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:22:43.139997 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:22:43.140008 | orchestrator | 2026-01-07 00:22:43.140018 | orchestrator | 2026-01-07 00:22:43.140029 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-07 00:22:43.140040 | orchestrator | Wednesday 07 January 2026 00:22:42 +0000 (0:00:01.217) 0:00:11.226 ***** 2026-01-07 00:22:43.140051 | orchestrator | =============================================================================== 2026-01-07 00:22:43.140062 | orchestrator | Gathering Facts --------------------------------------------------------- 4.96s 2026-01-07 00:22:43.140072 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.22s 2026-01-07 00:22:43.140083 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.18s 2026-01-07 00:22:43.140094 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.13s 2026-01-07 00:22:43.140105 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2026-01-07 00:22:43.140116 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-01-07 00:22:43.140145 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s 2026-01-07 00:22:43.140157 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2026-01-07 00:22:43.140168 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-01-07 00:22:43.140178 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-01-07 00:22:43.140189 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-01-07 00:22:43.140200 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-01-07 00:22:43.140211 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-01-07 00:22:43.450337 | 
orchestrator | + osism apply sshconfig 2026-01-07 00:22:55.607280 | orchestrator | 2026-01-07 00:22:55 | INFO  | Task 6b7302e0-da9a-495a-aff7-60cd4791bb21 (sshconfig) was prepared for execution. 2026-01-07 00:22:55.607417 | orchestrator | 2026-01-07 00:22:55 | INFO  | It takes a moment until task 6b7302e0-da9a-495a-aff7-60cd4791bb21 (sshconfig) has been started and output is visible here. 2026-01-07 00:23:08.055994 | orchestrator | 2026-01-07 00:23:08.056117 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-07 00:23:08.056135 | orchestrator | 2026-01-07 00:23:08.056148 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-07 00:23:08.056160 | orchestrator | Wednesday 07 January 2026 00:23:00 +0000 (0:00:00.167) 0:00:00.167 ***** 2026-01-07 00:23:08.056172 | orchestrator | ok: [testbed-manager] 2026-01-07 00:23:08.056184 | orchestrator | 2026-01-07 00:23:08.056196 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-07 00:23:08.056207 | orchestrator | Wednesday 07 January 2026 00:23:00 +0000 (0:00:00.540) 0:00:00.708 ***** 2026-01-07 00:23:08.056247 | orchestrator | changed: [testbed-manager] 2026-01-07 00:23:08.056259 | orchestrator | 2026-01-07 00:23:08.056271 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-07 00:23:08.056282 | orchestrator | Wednesday 07 January 2026 00:23:01 +0000 (0:00:00.535) 0:00:01.244 ***** 2026-01-07 00:23:08.056293 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-07 00:23:08.056305 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-07 00:23:08.056316 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-07 00:23:08.056327 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-07 00:23:08.056339 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3) 2026-01-07 00:23:08.056349 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-07 00:23:08.056360 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-01-07 00:23:08.056371 | orchestrator | 2026-01-07 00:23:08.056382 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-07 00:23:08.056393 | orchestrator | Wednesday 07 January 2026 00:23:07 +0000 (0:00:06.041) 0:00:07.285 ***** 2026-01-07 00:23:08.056404 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:23:08.056415 | orchestrator | 2026-01-07 00:23:08.056425 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-07 00:23:08.056436 | orchestrator | Wednesday 07 January 2026 00:23:07 +0000 (0:00:00.073) 0:00:07.359 ***** 2026-01-07 00:23:08.056447 | orchestrator | changed: [testbed-manager] 2026-01-07 00:23:08.056459 | orchestrator | 2026-01-07 00:23:08.056470 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:23:08.056482 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:23:08.056494 | orchestrator | 2026-01-07 00:23:08.056504 | orchestrator | 2026-01-07 00:23:08.056515 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:23:08.056527 | orchestrator | Wednesday 07 January 2026 00:23:07 +0000 (0:00:00.588) 0:00:07.948 ***** 2026-01-07 00:23:08.056538 | orchestrator | =============================================================================== 2026-01-07 00:23:08.056549 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.04s 2026-01-07 00:23:08.056560 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2026-01-07 00:23:08.056571 | orchestrator | 
osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s 2026-01-07 00:23:08.056582 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s 2026-01-07 00:23:08.056593 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-01-07 00:23:08.414336 | orchestrator | + osism apply known-hosts 2026-01-07 00:23:20.688073 | orchestrator | 2026-01-07 00:23:20 | INFO  | Task 43f5ae46-e466-4e28-9c1d-7c41bdff8210 (known-hosts) was prepared for execution. 2026-01-07 00:23:20.688225 | orchestrator | 2026-01-07 00:23:20 | INFO  | It takes a moment until task 43f5ae46-e466-4e28-9c1d-7c41bdff8210 (known-hosts) has been started and output is visible here. 2026-01-07 00:23:37.896527 | orchestrator | 2026-01-07 00:23:37.896697 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-07 00:23:37.896714 | orchestrator | 2026-01-07 00:23:37.896727 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-07 00:23:37.896740 | orchestrator | Wednesday 07 January 2026 00:23:24 +0000 (0:00:00.185) 0:00:00.185 ***** 2026-01-07 00:23:37.896752 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-07 00:23:37.896764 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-07 00:23:37.896775 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-07 00:23:37.896787 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-07 00:23:37.896821 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-07 00:23:37.896833 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-07 00:23:37.896843 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-07 00:23:37.896854 | orchestrator | 2026-01-07 00:23:37.896866 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with hostname] *** 2026-01-07 00:23:37.896879 | orchestrator | Wednesday 07 January 2026 00:23:31 +0000 (0:00:06.027) 0:00:06.212 ***** 2026-01-07 00:23:37.896892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-07 00:23:37.896906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-07 00:23:37.896929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-07 00:23:37.896941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-07 00:23:37.896952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-07 00:23:37.896963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-07 00:23:37.896973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-07 00:23:37.896984 | orchestrator | 2026-01-07 00:23:37.896995 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.897009 | orchestrator | Wednesday 07 January 2026 00:23:31 +0000 
(0:00:00.175) 0:00:06.387 ***** 2026-01-07 00:23:37.897022 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIICVw9aJ/tKjDjnkBMvNlyvD5XbAAvRA4B0CYCTc1GPa) 2026-01-07 00:23:37.897041 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrneTDn0wy8zSGKK4RpYNxZGfjneA3lMkNGma0M4TiLH0YmtPm/fpaPT5QO6JflASCRA4Gx4xZqP49AnbV5OUsOg075BlXm1izwsIXXlzgYpRpPVGVbCMXmdQwS6w3FMJ5BMCvUiCCb56IYs3WdHBSyUgndy8+MEmZupQAFaVhJheRxhOlOiBY9iTQ8n1XDWbyEXbvfH/j41aPHUPOf7oGL1Sb6sqTdpi7m9SfIzLsS/EFo/ZIbqhma0V2HCFfk6JZ/AHdZlcOIcXyumOkdxqn0yUkE/mBjA8evVOuyOouNPJCvX2ubezeatXQlaLNtc6/E3S2mzP2g4swf3HqbrSV5G/BHkyrI6RP2qPZuG6qVE3QbMN/FP0wfshC09JDphjlahjlKAlQyKVR8XVBmdAz0NcRGB2xjqOoWDjpvYfET6+8nxzvl2tlJi73SRiROnkZnigZllAAluFzHHCLdwTp7Amw4H2LpixupNVv8meGbb/FP0srjVtSEbw6G0Wl2aM=) 2026-01-07 00:23:37.897063 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN03nnE8LXo9sFfmH1QvxhUUjklgoNzys51HGoeXPLb5f4KBBw9cMIgpNoeE0+XDN3OBTyLOaVfOCDqRdp90O2Y=) 2026-01-07 00:23:37.897079 | orchestrator | 2026-01-07 00:23:37.897092 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.897105 | orchestrator | Wednesday 07 January 2026 00:23:32 +0000 (0:00:01.217) 0:00:07.605 ***** 2026-01-07 00:23:37.897136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDE8IaOhyiHRs6TqSDW2w1VGSGOZiAjGSZwJAErvT2MwaRRnSRvL6R/JadnVzyZZfF34yhu3BotHdpUKfHj2KlzpTVZGUAgKRqIPL6cnNtA0t1PstwM3CPi8Bgc708k5xN1RUqUkSaMuEVaJBb6CgF2ffuZ0VJtiX9cSUw1wIKDFbhtx8HpYIQfQtsRQXrj8fcLAd8FV62/GNJJ/kw/y7u2SaLJ7zJIfvHoT/9feK0aKGDM8UB54vMJANPY+hcT81oe4NMzgPctb903gQAeT1kb4lQo3xxD9An4MHIwTscy4qMH4h6cYMhL9Wvj1czrna314WSh9e2nPbfeg1q04B92F6H2vty3DHRgTU6tVsFmf/a45da6MEXJAHBGtBuTX2y/fPoSFpef/WKQceC4uEuq9395OJ+8oMK0PhPUT59Ued/fxSRrFSk/ZgSO96McJ3uUyhWENsbI0IMJ7LmJo7h4GvgVMw01KE9oNLarhGEwsx3VKig3x/F7lwS/prSnZk0=) 2026-01-07 00:23:37.897160 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPVvv9TvtF46RvNVGjLkKgJeyn9SINirv8pCPUNe6Ufcu2Wzd5M/DJKP9i5HYFFT1qN9WbTGt4zExOjPzGTgfLU=) 2026-01-07 00:23:37.897174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFFtPjWFML+tS2StpLYDJ9yuKrYCxbC5EpHCyu3WVm+U) 2026-01-07 00:23:37.897187 | orchestrator | 2026-01-07 00:23:37.897199 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.897212 | orchestrator | Wednesday 07 January 2026 00:23:33 +0000 (0:00:01.085) 0:00:08.690 ***** 2026-01-07 00:23:37.897224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKsc/kZ0srNborj6e43LTGDyjpsK9hVo5oRMqPybIvZN) 2026-01-07 00:23:37.897238 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCbG018ZyDV2ARD7MdEcILyHJmfkxRuz3an3Ne8jOrTp8g0LkpCOzfxBJUPDCtXSEEMCR3ZAJ9sc/ZHGeSGbtCfPWhAB1l2QsMmLEE5iXR1d4RZXWNQvB+C8uu2LDgYw0dq0U7O7hOnByZdXP4oXW1gz2m0N7ZIxdriU4PWNunlFet++/496rq0Kiv+MNUimHTRtY8OU256CckBpagcqLzCRM4HkzlkvGnqeAaaDNRmhwLKZSzayNn2WOnLHnyugXIMy4/KYel/yIKeWt/rIPDdV7IQmWa8rxcGjcbjJ+hCIAM2WyLHHk3hYbun0ePbawEGyAkIEX1p091xE0ltxZ18OSxL1g8wOo4NseRen3u8T8XEmGfipWioCQmFmj6bL1ON//lLrpcM1uiAakuykG5OqdtGsm3iMVN97pOofyLPVsdkCX2zXLrHkGDDEh6osifB62C9+ybFZF2t+cnxekeqMB5iM6jS+S1yIg8usm/Mima3NYuzuHFdLtcGKENvN60=) 2026-01-07 00:23:37.897251 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOmjh+yyXtqlpzxZiKck6MLd8RqC0SNua0RuJ0XyxcRUyQLuLf9PH9hZpqneIs4WfIIaVtGdZb4/aWfGVt4wMYE=) 2026-01-07 00:23:37.897263 | orchestrator | 2026-01-07 00:23:37.897276 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.897288 | orchestrator | Wednesday 07 January 2026 00:23:34 +0000 (0:00:01.055) 0:00:09.746 ***** 2026-01-07 00:23:37.897377 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHKP+TUNVhgNBfkfiS/GZwiCvQYEoYzU/Ag0aP1RJ/Vl2LQA4kVVTc4+Q/Dkg88trArDaag8sJ6MunfBRiOfH3U=) 2026-01-07 00:23:37.897392 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH+hQe5/cGKbuywhd0RHYhvvVbJ9XJ6q4QcGNyWWjjL4) 2026-01-07 00:23:37.897412 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDU2PSF+imLyIoliUIoBFXJlLAo4cOYyzR8TWSL0MOGZ3ZFpAoC4jRG74XinaIrcfAFaH/Az6VmUCk0LPtP3/k0jodlMpAExDowl6K1ntjziTHEPhYFHezg+5cSa4i6lflL0IxjcuBBv+2msxjRlVl7pPlzNCpQtUhpjNwBjel93arg3+rAGXWaoTl6CSbUycQ8mvfLHIT5u0C0GZgBw8mpoBXIobMMz9AUTAs2W+I2SlqY2Ot5QZbyGXoXEGUio2iprcF6xM8t4MoH373AzMToGR3zsIV9NNgphQSkjul8psLck6QnbWI8OMW/lwp3q0neYPfPzs3MJ8C0hYON/LBuBBCofz2d+djPPgAuvFh9R9QJcM2kEs/wBKL7UI6h4kgcCcdq+eDEgz581EzrHLGSXO/Zi3pvzxKDNquTYFzXPFlVxpwZyZ653LnZoyghYzqlIF8PrfGs4pHOaGv5mh7U+aFRMGzX2ipG2HNYQ5+7JNJq6Q7kNY89vPo0bIHcTec=) 2026-01-07 00:23:37.897454 | orchestrator | 2026-01-07 00:23:37.897472 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.897490 | orchestrator | Wednesday 07 January 2026 00:23:35 +0000 (0:00:01.103) 0:00:10.849 ***** 2026-01-07 00:23:37.897508 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgpoLb33tuKRwdJEprqujIFRw6qYwNPyDE3bIASUAZwLm7Bah8cnkGwGRQe4vXEHugTc4aoQq1ydiM9XZMyTwIxoWAosU8315DS3j+UYhgHJjIOaMPcVsjqZfQMLZKzYkwMcHMj1ckGPwv/EQA5bPL4CW8MF9wmsRHJ5ABm9Vlc4VG8X0hNg6Z3yZMk5K1yuuRqEsAcyJ/WKDC+v8EK75xElf+2Bzj0FYgFrqexOnlyEbpgoJ/Nhas8PM6gUgi318n3QnX6a8FhBsM4/pMc5b45lui+j4/tw7V/lHH3BNNAK6LWh+R097+iOBxnsSPwJM2qXDyTjnqEaTEpO4hbVvC6jdWAW3UOfHIapIkCjIMuZNLoXogQZA9gLM1lOEj+pa/R6ZMkJ0svMRAzDkMgyNMSs68y+AgmZaZFd67yBeBFqgXcfLNqsZ0inWAuz4eQ6YT/Z7cYwYl7/6Bi81AL7jcU1oJCo4Vreiucb7OSivIN24K7aK81bnoj1vPCK/t8Ts=) 2026-01-07 00:23:37.897542 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAa7V5hm+ljZMDsNviCOj+jmtR1jJXa9wUai03O+dWpg1AW/EywAZESLtnWwOcDNNE1KUXjtKDh9P6mWHXh1ES0=) 2026-01-07 00:23:37.897562 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILygH9BcJFvwCh/uiOxcjYo9NnrqWH7DA1okPro/j3KG) 2026-01-07 00:23:37.897581 | orchestrator | 2026-01-07 00:23:37.897599 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.897618 | orchestrator | Wednesday 07 January 2026 00:23:36 +0000 (0:00:01.126) 0:00:11.975 ***** 2026-01-07 00:23:37.897640 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP/IN5d5dgw/EcRsg9kwIuwdfLxr7o+zZKGHk+cP2vABQQgr42TabkPWMW1iWCk6+ZRuQepPrEmaCAdQm5xJ6rg=) 2026-01-07 00:23:48.988830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEvSeu/KLE+RaSxW7H5dUtfx8IJzko12PGo53WDlwQuJ) 2026-01-07 00:23:48.988954 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPn/QrNwYfVPySfwCul63rQkmkTX5vJFXFxJrsNQu+x6hIw0JC/7dXLQo0OiEGbxlZ0QDhfw3QDZ8VMfsS69S8ZEzAgBNF/k8l21lAYSDPRipUZMzFNecxY7zZsk71MW6svheS+7HA8ZIHYE338MOy3rBBySMjjzSwpg5OqivBs/diog0t0PxY7hDI6egAxbcrmoSIuwUS2/d+avoUwopaast271G67MpWEpapFXibjiX8HM44/YfxAdLq/7YDhpHJBVaIHCo+vBMOWfkxTkvLkFlhuuoZuQPUlbZzpNf928NnVC/MpfnPvp5TDXx3si+OuDaz1+CiyrGglSGcQainoMQWRPMOG29p4D5UIQskidaN92ll4pHIhn4Mrc4XalH8cMpyG0LPQ7NyqCzkaNJqG5GB1rmDN8szOUGXNwneYY0yIwg3HS3JSjNEv2A4EaCxW5VBYeqsjB6g87qKsyamSzaO3cMkVqtNhfLJ66i8NIw8lpmoX5S90A2uSazbD48=) 2026-01-07 00:23:48.988974 | orchestrator | 2026-01-07 00:23:48.988988 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:48.989000 | orchestrator | Wednesday 07 January 2026 00:23:37 +0000 (0:00:01.105) 0:00:13.081 ***** 2026-01-07 00:23:48.989012 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCjFWHUWYMcqZycRUgr065Rj1ir6vFsDfx23a7zpwzAf4yD4CLnGImfSsvwr0Me1gCbzap7nvgMIt+L7akJTJO0ot8KU++Mm8yDdNeC2yUZe1aCmejxpZNslbmnkF0oSOnlQFx5BcKmxLFIBjiwydPoAA1knEd3LW+fMyobDmI2k4NJdeTtbI+WYqc3yYYfjVLoI2NtbVFJ0lAvxljWFUgEUAy9UBNpXUFs15j0PSTLqDL3W5PfsnUUHBUDZISoiZJWKCXcuhB4DyP2yo1kKXZph5UJWF947oCPC8+SUuCwegqZzn28gVmqs1mYmft2G+dJiXt2zVFC0xLF0+Ue+M3nRSCKqIAfdyDZmJLnK5A2tWrmBM9ur7SILDMmbkbDUso57Smzl44VDML59uepIjItXi9Svs/Z5GjJRMxbp2OMvni+9dC2c6kTsVBnIMkMBRVJNpQB5hNpl2qk0kpCDxcRpeQmK0sr3vmGGNllOkyUhB1ay43az2gBZ64AkPLqpNs=) 2026-01-07 00:23:48.989024 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7xyCBSFYLRfhR0zmTLTM8ju/Q065DsUhk03OyZbyipgIFYCJy8eA3OtrfVtT75YV0aZCvPTiFCzjOdngfEfVo=) 2026-01-07 00:23:48.989038 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP5EF/Mw5CrbxawanhLCjKiVCKdbx13VKXEBv5Lgo60u) 2026-01-07 00:23:48.989049 | orchestrator | 2026-01-07 00:23:48.989060 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-07 00:23:48.989072 | orchestrator | Wednesday 07 January 2026 00:23:39 +0000 (0:00:01.133) 0:00:14.214 ***** 2026-01-07 00:23:48.989084 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-07 00:23:48.989105 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-07 00:23:48.989126 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-07 00:23:48.989145 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-07 00:23:48.989165 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-07 00:23:48.989182 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-07 00:23:48.989201 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-07 00:23:48.989219 | orchestrator | 2026-01-07 00:23:48.989239 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-07 00:23:48.989291 | orchestrator | Wednesday 07 January 2026 00:23:44 +0000 (0:00:05.374) 0:00:19.589 ***** 2026-01-07 00:23:48.989337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-07 00:23:48.989355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-07 00:23:48.989369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-07 00:23:48.989382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-07 00:23:48.989395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-07 00:23:48.989408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-07 00:23:48.989421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-07 00:23:48.989434 | orchestrator | 2026-01-07 00:23:48.989466 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:48.989480 | 
orchestrator | Wednesday 07 January 2026 00:23:44 +0000 (0:00:00.187) 0:00:19.776 ***** 2026-01-07 00:23:48.989493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrneTDn0wy8zSGKK4RpYNxZGfjneA3lMkNGma0M4TiLH0YmtPm/fpaPT5QO6JflASCRA4Gx4xZqP49AnbV5OUsOg075BlXm1izwsIXXlzgYpRpPVGVbCMXmdQwS6w3FMJ5BMCvUiCCb56IYs3WdHBSyUgndy8+MEmZupQAFaVhJheRxhOlOiBY9iTQ8n1XDWbyEXbvfH/j41aPHUPOf7oGL1Sb6sqTdpi7m9SfIzLsS/EFo/ZIbqhma0V2HCFfk6JZ/AHdZlcOIcXyumOkdxqn0yUkE/mBjA8evVOuyOouNPJCvX2ubezeatXQlaLNtc6/E3S2mzP2g4swf3HqbrSV5G/BHkyrI6RP2qPZuG6qVE3QbMN/FP0wfshC09JDphjlahjlKAlQyKVR8XVBmdAz0NcRGB2xjqOoWDjpvYfET6+8nxzvl2tlJi73SRiROnkZnigZllAAluFzHHCLdwTp7Amw4H2LpixupNVv8meGbb/FP0srjVtSEbw6G0Wl2aM=) 2026-01-07 00:23:48.989507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN03nnE8LXo9sFfmH1QvxhUUjklgoNzys51HGoeXPLb5f4KBBw9cMIgpNoeE0+XDN3OBTyLOaVfOCDqRdp90O2Y=) 2026-01-07 00:23:48.989521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIICVw9aJ/tKjDjnkBMvNlyvD5XbAAvRA4B0CYCTc1GPa) 2026-01-07 00:23:48.989532 | orchestrator | 2026-01-07 00:23:48.989546 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:48.989558 | orchestrator | Wednesday 07 January 2026 00:23:45 +0000 (0:00:01.096) 0:00:20.872 ***** 2026-01-07 00:23:48.989577 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDE8IaOhyiHRs6TqSDW2w1VGSGOZiAjGSZwJAErvT2MwaRRnSRvL6R/JadnVzyZZfF34yhu3BotHdpUKfHj2KlzpTVZGUAgKRqIPL6cnNtA0t1PstwM3CPi8Bgc708k5xN1RUqUkSaMuEVaJBb6CgF2ffuZ0VJtiX9cSUw1wIKDFbhtx8HpYIQfQtsRQXrj8fcLAd8FV62/GNJJ/kw/y7u2SaLJ7zJIfvHoT/9feK0aKGDM8UB54vMJANPY+hcT81oe4NMzgPctb903gQAeT1kb4lQo3xxD9An4MHIwTscy4qMH4h6cYMhL9Wvj1czrna314WSh9e2nPbfeg1q04B92F6H2vty3DHRgTU6tVsFmf/a45da6MEXJAHBGtBuTX2y/fPoSFpef/WKQceC4uEuq9395OJ+8oMK0PhPUT59Ued/fxSRrFSk/ZgSO96McJ3uUyhWENsbI0IMJ7LmJo7h4GvgVMw01KE9oNLarhGEwsx3VKig3x/F7lwS/prSnZk0=) 2026-01-07 00:23:48.989597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPVvv9TvtF46RvNVGjLkKgJeyn9SINirv8pCPUNe6Ufcu2Wzd5M/DJKP9i5HYFFT1qN9WbTGt4zExOjPzGTgfLU=) 2026-01-07 00:23:48.989629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFFtPjWFML+tS2StpLYDJ9yuKrYCxbC5EpHCyu3WVm+U) 2026-01-07 00:23:48.989722 | orchestrator | 2026-01-07 00:23:48.989745 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:48.989765 | orchestrator | Wednesday 07 January 2026 00:23:46 +0000 (0:00:01.131) 0:00:22.003 ***** 2026-01-07 00:23:48.989778 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOmjh+yyXtqlpzxZiKck6MLd8RqC0SNua0RuJ0XyxcRUyQLuLf9PH9hZpqneIs4WfIIaVtGdZb4/aWfGVt4wMYE=) 2026-01-07 00:23:48.989790 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCbG018ZyDV2ARD7MdEcILyHJmfkxRuz3an3Ne8jOrTp8g0LkpCOzfxBJUPDCtXSEEMCR3ZAJ9sc/ZHGeSGbtCfPWhAB1l2QsMmLEE5iXR1d4RZXWNQvB+C8uu2LDgYw0dq0U7O7hOnByZdXP4oXW1gz2m0N7ZIxdriU4PWNunlFet++/496rq0Kiv+MNUimHTRtY8OU256CckBpagcqLzCRM4HkzlkvGnqeAaaDNRmhwLKZSzayNn2WOnLHnyugXIMy4/KYel/yIKeWt/rIPDdV7IQmWa8rxcGjcbjJ+hCIAM2WyLHHk3hYbun0ePbawEGyAkIEX1p091xE0ltxZ18OSxL1g8wOo4NseRen3u8T8XEmGfipWioCQmFmj6bL1ON//lLrpcM1uiAakuykG5OqdtGsm3iMVN97pOofyLPVsdkCX2zXLrHkGDDEh6osifB62C9+ybFZF2t+cnxekeqMB5iM6jS+S1yIg8usm/Mima3NYuzuHFdLtcGKENvN60=) 2026-01-07 00:23:48.989802 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKsc/kZ0srNborj6e43LTGDyjpsK9hVo5oRMqPybIvZN) 2026-01-07 00:23:48.989813 | orchestrator | 2026-01-07 00:23:48.989824 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:48.989835 | orchestrator | Wednesday 07 January 2026 00:23:47 +0000 (0:00:01.092) 0:00:23.096 ***** 2026-01-07 00:23:48.989845 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHKP+TUNVhgNBfkfiS/GZwiCvQYEoYzU/Ag0aP1RJ/Vl2LQA4kVVTc4+Q/Dkg88trArDaag8sJ6MunfBRiOfH3U=) 2026-01-07 00:23:48.989878 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU2PSF+imLyIoliUIoBFXJlLAo4cOYyzR8TWSL0MOGZ3ZFpAoC4jRG74XinaIrcfAFaH/Az6VmUCk0LPtP3/k0jodlMpAExDowl6K1ntjziTHEPhYFHezg+5cSa4i6lflL0IxjcuBBv+2msxjRlVl7pPlzNCpQtUhpjNwBjel93arg3+rAGXWaoTl6CSbUycQ8mvfLHIT5u0C0GZgBw8mpoBXIobMMz9AUTAs2W+I2SlqY2Ot5QZbyGXoXEGUio2iprcF6xM8t4MoH373AzMToGR3zsIV9NNgphQSkjul8psLck6QnbWI8OMW/lwp3q0neYPfPzs3MJ8C0hYON/LBuBBCofz2d+djPPgAuvFh9R9QJcM2kEs/wBKL7UI6h4kgcCcdq+eDEgz581EzrHLGSXO/Zi3pvzxKDNquTYFzXPFlVxpwZyZ653LnZoyghYzqlIF8PrfGs4pHOaGv5mh7U+aFRMGzX2ipG2HNYQ5+7JNJq6Q7kNY89vPo0bIHcTec=) 2026-01-07 00:23:53.520041 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH+hQe5/cGKbuywhd0RHYhvvVbJ9XJ6q4QcGNyWWjjL4) 2026-01-07 00:23:53.520157 | orchestrator | 2026-01-07 00:23:53.520193 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:53.520208 | orchestrator | Wednesday 07 January 2026 00:23:48 +0000 (0:00:01.077) 0:00:24.174 ***** 2026-01-07 00:23:53.520227 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgpoLb33tuKRwdJEprqujIFRw6qYwNPyDE3bIASUAZwLm7Bah8cnkGwGRQe4vXEHugTc4aoQq1ydiM9XZMyTwIxoWAosU8315DS3j+UYhgHJjIOaMPcVsjqZfQMLZKzYkwMcHMj1ckGPwv/EQA5bPL4CW8MF9wmsRHJ5ABm9Vlc4VG8X0hNg6Z3yZMk5K1yuuRqEsAcyJ/WKDC+v8EK75xElf+2Bzj0FYgFrqexOnlyEbpgoJ/Nhas8PM6gUgi318n3QnX6a8FhBsM4/pMc5b45lui+j4/tw7V/lHH3BNNAK6LWh+R097+iOBxnsSPwJM2qXDyTjnqEaTEpO4hbVvC6jdWAW3UOfHIapIkCjIMuZNLoXogQZA9gLM1lOEj+pa/R6ZMkJ0svMRAzDkMgyNMSs68y+AgmZaZFd67yBeBFqgXcfLNqsZ0inWAuz4eQ6YT/Z7cYwYl7/6Bi81AL7jcU1oJCo4Vreiucb7OSivIN24K7aK81bnoj1vPCK/t8Ts=) 2026-01-07 00:23:53.520243 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAa7V5hm+ljZMDsNviCOj+jmtR1jJXa9wUai03O+dWpg1AW/EywAZESLtnWwOcDNNE1KUXjtKDh9P6mWHXh1ES0=) 2026-01-07 00:23:53.520257 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILygH9BcJFvwCh/uiOxcjYo9NnrqWH7DA1okPro/j3KG) 2026-01-07 00:23:53.520292 | orchestrator | 2026-01-07 00:23:53.520305 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:53.520316 | orchestrator | Wednesday 07 January 2026 00:23:50 +0000 (0:00:01.081) 0:00:25.255 ***** 2026-01-07 00:23:53.520327 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDPn/QrNwYfVPySfwCul63rQkmkTX5vJFXFxJrsNQu+x6hIw0JC/7dXLQo0OiEGbxlZ0QDhfw3QDZ8VMfsS69S8ZEzAgBNF/k8l21lAYSDPRipUZMzFNecxY7zZsk71MW6svheS+7HA8ZIHYE338MOy3rBBySMjjzSwpg5OqivBs/diog0t0PxY7hDI6egAxbcrmoSIuwUS2/d+avoUwopaast271G67MpWEpapFXibjiX8HM44/YfxAdLq/7YDhpHJBVaIHCo+vBMOWfkxTkvLkFlhuuoZuQPUlbZzpNf928NnVC/MpfnPvp5TDXx3si+OuDaz1+CiyrGglSGcQainoMQWRPMOG29p4D5UIQskidaN92ll4pHIhn4Mrc4XalH8cMpyG0LPQ7NyqCzkaNJqG5GB1rmDN8szOUGXNwneYY0yIwg3HS3JSjNEv2A4EaCxW5VBYeqsjB6g87qKsyamSzaO3cMkVqtNhfLJ66i8NIw8lpmoX5S90A2uSazbD48=) 2026-01-07 00:23:53.520339 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP/IN5d5dgw/EcRsg9kwIuwdfLxr7o+zZKGHk+cP2vABQQgr42TabkPWMW1iWCk6+ZRuQepPrEmaCAdQm5xJ6rg=) 2026-01-07 00:23:53.520350 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEvSeu/KLE+RaSxW7H5dUtfx8IJzko12PGo53WDlwQuJ) 2026-01-07 00:23:53.520361 | orchestrator | 2026-01-07 00:23:53.520372 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:53.520383 | orchestrator | Wednesday 07 January 2026 00:23:51 +0000 (0:00:01.072) 0:00:26.328 ***** 2026-01-07 00:23:53.520394 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7xyCBSFYLRfhR0zmTLTM8ju/Q065DsUhk03OyZbyipgIFYCJy8eA3OtrfVtT75YV0aZCvPTiFCzjOdngfEfVo=) 2026-01-07 00:23:53.520405 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP5EF/Mw5CrbxawanhLCjKiVCKdbx13VKXEBv5Lgo60u) 2026-01-07 00:23:53.520417 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCjFWHUWYMcqZycRUgr065Rj1ir6vFsDfx23a7zpwzAf4yD4CLnGImfSsvwr0Me1gCbzap7nvgMIt+L7akJTJO0ot8KU++Mm8yDdNeC2yUZe1aCmejxpZNslbmnkF0oSOnlQFx5BcKmxLFIBjiwydPoAA1knEd3LW+fMyobDmI2k4NJdeTtbI+WYqc3yYYfjVLoI2NtbVFJ0lAvxljWFUgEUAy9UBNpXUFs15j0PSTLqDL3W5PfsnUUHBUDZISoiZJWKCXcuhB4DyP2yo1kKXZph5UJWF947oCPC8+SUuCwegqZzn28gVmqs1mYmft2G+dJiXt2zVFC0xLF0+Ue+M3nRSCKqIAfdyDZmJLnK5A2tWrmBM9ur7SILDMmbkbDUso57Smzl44VDML59uepIjItXi9Svs/Z5GjJRMxbp2OMvni+9dC2c6kTsVBnIMkMBRVJNpQB5hNpl2qk0kpCDxcRpeQmK0sr3vmGGNllOkyUhB1ay43az2gBZ64AkPLqpNs=) 2026-01-07 00:23:53.520429 | orchestrator | 2026-01-07 00:23:53.520440 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-07 00:23:53.520451 | orchestrator | Wednesday 07 January 2026 00:23:52 +0000 (0:00:01.090) 0:00:27.418 ***** 2026-01-07 00:23:53.520462 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-07 00:23:53.520473 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-07 00:23:53.520484 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-07 00:23:53.520495 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-07 00:23:53.520505 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-07 00:23:53.520535 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-07 00:23:53.520547 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-07 00:23:53.520560 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:23:53.520573 | orchestrator | 2026-01-07 00:23:53.520586 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-07 00:23:53.520599 | orchestrator | Wednesday 07 January 2026 00:23:52 +0000 (0:00:00.193) 0:00:27.611 ***** 2026-01-07 00:23:53.520612 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:23:53.520624 | orchestrator | 
2026-01-07 00:23:53.520677 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-07 00:23:53.520691 | orchestrator | Wednesday 07 January 2026 00:23:52 +0000 (0:00:00.066) 0:00:27.678 ***** 2026-01-07 00:23:53.520704 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:23:53.520717 | orchestrator | 2026-01-07 00:23:53.520745 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-07 00:23:53.520758 | orchestrator | Wednesday 07 January 2026 00:23:52 +0000 (0:00:00.061) 0:00:27.740 ***** 2026-01-07 00:23:53.520781 | orchestrator | changed: [testbed-manager] 2026-01-07 00:23:53.520793 | orchestrator | 2026-01-07 00:23:53.520806 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:23:53.520819 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:23:53.520833 | orchestrator | 2026-01-07 00:23:53.520845 | orchestrator | 2026-01-07 00:23:53.520858 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:23:53.520870 | orchestrator | Wednesday 07 January 2026 00:23:53 +0000 (0:00:00.761) 0:00:28.501 ***** 2026-01-07 00:23:53.520883 | orchestrator | =============================================================================== 2026-01-07 00:23:53.520895 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.03s 2026-01-07 00:23:53.520908 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.37s 2026-01-07 00:23:53.520919 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-01-07 00:23:53.520930 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-07 00:23:53.520940 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-07 00:23:53.520951 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-07 00:23:53.520962 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-07 00:23:53.520972 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-07 00:23:53.520983 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-07 00:23:53.520994 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-07 00:23:53.521004 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-07 00:23:53.521015 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-07 00:23:53.521025 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-07 00:23:53.521036 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-07 00:23:53.521047 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-07 00:23:53.521057 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-07 00:23:53.521068 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.76s 2026-01-07 00:23:53.521085 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-01-07 00:23:53.521096 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-01-07 00:23:53.521108 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-01-07 
00:23:53.846989 | orchestrator | + osism apply squid 2026-01-07 00:24:05.974278 | orchestrator | 2026-01-07 00:24:05 | INFO  | Task 5810c018-8a5b-4ccf-9c9e-2c817dd20b2b (squid) was prepared for execution. 2026-01-07 00:24:05.974412 | orchestrator | 2026-01-07 00:24:05 | INFO  | It takes a moment until task 5810c018-8a5b-4ccf-9c9e-2c817dd20b2b (squid) has been started and output is visible here. 2026-01-07 00:26:02.346970 | orchestrator | 2026-01-07 00:26:02.347082 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-07 00:26:02.347122 | orchestrator | 2026-01-07 00:26:02.347133 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-07 00:26:02.347142 | orchestrator | Wednesday 07 January 2026 00:24:10 +0000 (0:00:00.187) 0:00:00.187 ***** 2026-01-07 00:26:02.347151 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:26:02.347161 | orchestrator | 2026-01-07 00:26:02.347170 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-07 00:26:02.347179 | orchestrator | Wednesday 07 January 2026 00:24:10 +0000 (0:00:00.082) 0:00:00.269 ***** 2026-01-07 00:26:02.347188 | orchestrator | ok: [testbed-manager] 2026-01-07 00:26:02.347197 | orchestrator | 2026-01-07 00:26:02.347206 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-07 00:26:02.347214 | orchestrator | Wednesday 07 January 2026 00:24:12 +0000 (0:00:01.613) 0:00:01.883 ***** 2026-01-07 00:26:02.347224 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-07 00:26:02.347232 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-07 00:26:02.347241 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-07 
00:26:02.347250 | orchestrator | 2026-01-07 00:26:02.347258 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-07 00:26:02.347267 | orchestrator | Wednesday 07 January 2026 00:24:13 +0000 (0:00:01.191) 0:00:03.075 ***** 2026-01-07 00:26:02.347276 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-07 00:26:02.347285 | orchestrator | 2026-01-07 00:26:02.347293 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-07 00:26:02.347302 | orchestrator | Wednesday 07 January 2026 00:24:14 +0000 (0:00:01.093) 0:00:04.168 ***** 2026-01-07 00:26:02.347311 | orchestrator | ok: [testbed-manager] 2026-01-07 00:26:02.347319 | orchestrator | 2026-01-07 00:26:02.347328 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-07 00:26:02.347349 | orchestrator | Wednesday 07 January 2026 00:24:14 +0000 (0:00:00.376) 0:00:04.544 ***** 2026-01-07 00:26:02.347358 | orchestrator | changed: [testbed-manager] 2026-01-07 00:26:02.347367 | orchestrator | 2026-01-07 00:26:02.347375 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-07 00:26:02.347384 | orchestrator | Wednesday 07 January 2026 00:24:15 +0000 (0:00:00.950) 0:00:05.495 ***** 2026-01-07 00:26:02.347393 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-07 00:26:02.347402 | orchestrator | ok: [testbed-manager] 2026-01-07 00:26:02.347411 | orchestrator | 2026-01-07 00:26:02.347419 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-07 00:26:02.347428 | orchestrator | Wednesday 07 January 2026 00:24:49 +0000 (0:00:33.575) 0:00:39.070 ***** 2026-01-07 00:26:02.347437 | orchestrator | changed: [testbed-manager] 2026-01-07 00:26:02.347446 | orchestrator | 2026-01-07 00:26:02.347454 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-07 00:26:02.347463 | orchestrator | Wednesday 07 January 2026 00:25:01 +0000 (0:00:12.019) 0:00:51.090 ***** 2026-01-07 00:26:02.347472 | orchestrator | Pausing for 60 seconds 2026-01-07 00:26:02.347480 | orchestrator | changed: [testbed-manager] 2026-01-07 00:26:02.347489 | orchestrator | 2026-01-07 00:26:02.347498 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-07 00:26:02.347508 | orchestrator | Wednesday 07 January 2026 00:26:01 +0000 (0:01:00.089) 0:01:51.180 ***** 2026-01-07 00:26:02.347519 | orchestrator | ok: [testbed-manager] 2026-01-07 00:26:02.347529 | orchestrator | 2026-01-07 00:26:02.347539 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-07 00:26:02.347549 | orchestrator | Wednesday 07 January 2026 00:26:01 +0000 (0:00:00.062) 0:01:51.243 ***** 2026-01-07 00:26:02.347559 | orchestrator | changed: [testbed-manager] 2026-01-07 00:26:02.347569 | orchestrator | 2026-01-07 00:26:02.347579 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:26:02.347596 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:26:02.347607 | orchestrator | 2026-01-07 00:26:02.347618 | orchestrator | 2026-01-07 00:26:02.347660 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-07 00:26:02.347858 | orchestrator | Wednesday 07 January 2026 00:26:02 +0000 (0:00:00.643) 0:01:51.886 ***** 2026-01-07 00:26:02.347916 | orchestrator | =============================================================================== 2026-01-07 00:26:02.347927 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-01-07 00:26:02.347936 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.58s 2026-01-07 00:26:02.347945 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.02s 2026-01-07 00:26:02.348000 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.61s 2026-01-07 00:26:02.348016 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s 2026-01-07 00:26:02.348031 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2026-01-07 00:26:02.348044 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2026-01-07 00:26:02.348058 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2026-01-07 00:26:02.348072 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-01-07 00:26:02.348085 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-01-07 00:26:02.348098 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-01-07 00:26:02.702942 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-07 00:26:02.703046 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-07 00:26:02.711151 | orchestrator | + set -e 2026-01-07 00:26:02.711190 | orchestrator | + NAMESPACE=kolla 2026-01-07 
00:26:02.711204 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-07 00:26:02.714854 | orchestrator | ++ semver latest 9.0.0 2026-01-07 00:26:02.773056 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-07 00:26:02.773142 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-07 00:26:02.774122 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-07 00:26:14.870360 | orchestrator | 2026-01-07 00:26:14 | INFO  | Task b20d2538-d0b2-485c-b468-91302ba2da45 (operator) was prepared for execution. 2026-01-07 00:26:14.870500 | orchestrator | 2026-01-07 00:26:14 | INFO  | It takes a moment until task b20d2538-d0b2-485c-b468-91302ba2da45 (operator) has been started and output is visible here. 2026-01-07 00:26:31.862114 | orchestrator | 2026-01-07 00:26:31.862371 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-07 00:26:31.862395 | orchestrator | 2026-01-07 00:26:31.862408 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:26:31.862420 | orchestrator | Wednesday 07 January 2026 00:26:19 +0000 (0:00:00.140) 0:00:00.140 ***** 2026-01-07 00:26:31.862432 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:26:31.862444 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:26:31.862456 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:26:31.862468 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:26:31.862478 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:26:31.862489 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:26:31.862501 | orchestrator | 2026-01-07 00:26:31.862517 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-07 00:26:31.862528 | orchestrator | Wednesday 07 January 2026 00:26:22 +0000 (0:00:03.545) 0:00:03.685 ***** 2026-01-07 00:26:31.862539 | orchestrator | ok: [testbed-node-5] 
2026-01-07 00:26:31.862550 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:26:31.862561 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:26:31.862572 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:26:31.862583 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:26:31.862617 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:26:31.862669 | orchestrator | 2026-01-07 00:26:31.862681 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-07 00:26:31.862825 | orchestrator | 2026-01-07 00:26:31.862838 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-07 00:26:31.862849 | orchestrator | Wednesday 07 January 2026 00:26:23 +0000 (0:00:00.797) 0:00:04.482 ***** 2026-01-07 00:26:31.862860 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:26:31.862871 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:26:31.862882 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:26:31.862893 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:26:31.862903 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:26:31.862914 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:26:31.862925 | orchestrator | 2026-01-07 00:26:31.862936 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-07 00:26:31.862947 | orchestrator | Wednesday 07 January 2026 00:26:23 +0000 (0:00:00.170) 0:00:04.653 ***** 2026-01-07 00:26:31.862958 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:26:31.862969 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:26:31.862980 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:26:31.862990 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:26:31.863001 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:26:31.863011 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:26:31.863022 | orchestrator | 2026-01-07 00:26:31.863033 | orchestrator | TASK [osism.commons.operator : Create operator group] 
**************************
2026-01-07 00:26:31.863044 | orchestrator | Wednesday 07 January 2026 00:26:23 +0000 (0:00:00.189) 0:00:04.843 *****
2026-01-07 00:26:31.863055 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:31.863067 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:31.863078 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:31.863088 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:31.863099 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:31.863110 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:31.863121 | orchestrator |
2026-01-07 00:26:31.863132 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-07 00:26:31.863160 | orchestrator | Wednesday 07 January 2026 00:26:24 +0000 (0:00:00.640) 0:00:05.484 *****
2026-01-07 00:26:31.863172 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:31.863183 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:31.863194 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:31.863205 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:31.863222 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:31.863241 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:31.863260 | orchestrator |
2026-01-07 00:26:31.863280 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-07 00:26:31.863298 | orchestrator | Wednesday 07 January 2026 00:26:25 +0000 (0:00:01.319) 0:00:06.803 *****
2026-01-07 00:26:31.863317 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-07 00:26:31.863334 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-07 00:26:31.863353 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-07 00:26:31.863373 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-07 00:26:31.863391 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-07 00:26:31.863411 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-07 00:26:31.863431 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-07 00:26:31.863450 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-07 00:26:31.863469 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-07 00:26:31.863489 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-07 00:26:31.863509 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-07 00:26:31.863524 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-07 00:26:31.863535 | orchestrator |
2026-01-07 00:26:31.863546 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-07 00:26:31.863583 | orchestrator | Wednesday 07 January 2026 00:26:27 +0000 (0:00:01.319) 0:00:08.123 *****
2026-01-07 00:26:31.863594 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:31.863604 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:31.863615 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:31.863661 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:31.863673 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:31.863683 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:31.863694 | orchestrator |
2026-01-07 00:26:31.863705 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-07 00:26:31.863717 | orchestrator | Wednesday 07 January 2026 00:26:28 +0000 (0:00:01.255) 0:00:09.379 *****
2026-01-07 00:26:31.863728 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-07 00:26:31.863739 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-07 00:26:31.863750 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-07 00:26:31.863761 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:31.863794 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:31.863806 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:31.863817 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:31.863828 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:31.863838 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:31.863849 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:31.863860 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:31.863870 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:31.863881 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:31.863892 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:31.863902 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:31.863913 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:31.863924 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:31.863935 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:31.863953 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:31.863964 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:31.863975 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:31.863986 | orchestrator |
2026-01-07 00:26:31.863996 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-07 00:26:31.864008 | orchestrator | Wednesday 07 January 2026 00:26:29 +0000 (0:00:01.299) 0:00:10.679 *****
2026-01-07 00:26:31.864019 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:31.864030 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:31.864040 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:31.864051 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:31.864062 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:31.864072 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:31.864083 | orchestrator |
2026-01-07 00:26:31.864094 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-07 00:26:31.864105 | orchestrator | Wednesday 07 January 2026 00:26:29 +0000 (0:00:00.149) 0:00:10.828 *****
2026-01-07 00:26:31.864115 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:31.864126 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:31.864137 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:31.864147 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:31.864167 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:31.864177 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:31.864188 | orchestrator |
2026-01-07 00:26:31.864199 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-07 00:26:31.864210 | orchestrator | Wednesday 07 January 2026 00:26:29 +0000 (0:00:00.189) 0:00:11.017 *****
2026-01-07 00:26:31.864221 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:31.864232 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:31.864243 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:31.864253 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:31.864264 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:31.864275 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:31.864285 | orchestrator |
2026-01-07 00:26:31.864296 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-07 00:26:31.864307 | orchestrator | Wednesday 07 January 2026 00:26:30 +0000 (0:00:00.616) 0:00:11.633 *****
2026-01-07 00:26:31.864318 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:31.864329 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:31.864339 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:31.864350 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:31.864360 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:31.864371 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:31.864382 | orchestrator |
2026-01-07 00:26:31.864393 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-07 00:26:31.864406 | orchestrator | Wednesday 07 January 2026 00:26:30 +0000 (0:00:00.177) 0:00:11.810 *****
2026-01-07 00:26:31.864425 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 00:26:31.864444 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-07 00:26:31.864462 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 00:26:31.864480 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:31.864498 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:31.864515 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 00:26:31.864532 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 00:26:31.864549 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:31.864568 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:31.864587 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:31.864604 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-07 00:26:31.864693 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:31.864710 | orchestrator |
2026-01-07 00:26:31.864721 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-07 00:26:31.864732 | orchestrator | Wednesday 07 January 2026 00:26:31 +0000 (0:00:00.806) 0:00:12.617 *****
2026-01-07 00:26:31.864743 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:31.864754 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:31.864765 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:31.864775 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:31.864786 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:31.864797 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:31.864808 | orchestrator |
2026-01-07 00:26:31.864819 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-07 00:26:31.864830 | orchestrator | Wednesday 07 January 2026 00:26:31 +0000 (0:00:00.159) 0:00:12.777 *****
2026-01-07 00:26:31.864840 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:31.864851 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:31.864862 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:31.864873 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:31.864895 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:33.251874 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:33.251973 | orchestrator |
2026-01-07 00:26:33.251988 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-07 00:26:33.252000 | orchestrator | Wednesday 07 January 2026 00:26:31 +0000 (0:00:00.162) 0:00:12.939 *****
2026-01-07 00:26:33.252036 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:33.252047 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:33.252056 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:33.252066 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:33.252076 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:33.252085 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:33.252095 | orchestrator |
2026-01-07 00:26:33.252105 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-07 00:26:33.252115 | orchestrator | Wednesday 07 January 2026 00:26:32 +0000 (0:00:00.165) 0:00:13.105 *****
2026-01-07 00:26:33.252125 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:33.252134 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:33.252143 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:33.252153 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:33.252162 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:33.252172 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:33.252181 | orchestrator |
2026-01-07 00:26:33.252191 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-07 00:26:33.252201 | orchestrator | Wednesday 07 January 2026 00:26:32 +0000 (0:00:00.690) 0:00:13.795 *****
2026-01-07 00:26:33.252211 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:33.252220 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:33.252230 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:33.252239 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:33.252249 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:33.252258 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:33.252268 | orchestrator |
2026-01-07 00:26:33.252277 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:26:33.252288 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:33.252299 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:33.252309 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:33.252318 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:33.252328 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:33.252356 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:33.252367 | orchestrator |
2026-01-07 00:26:33.252376 | orchestrator |
2026-01-07 00:26:33.252386 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:26:33.252395 | orchestrator | Wednesday 07 January 2026 00:26:32 +0000 (0:00:00.240) 0:00:14.036 *****
2026-01-07 00:26:33.252405 | orchestrator | ===============================================================================
2026-01-07 00:26:33.252415 | orchestrator | Gathering Facts --------------------------------------------------------- 3.55s
2026-01-07 00:26:33.252426 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.32s
2026-01-07 00:26:33.252436 | orchestrator | osism.commons.operator : Create user ------------------------------------ 1.32s
2026-01-07 00:26:33.252448 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s
2026-01-07 00:26:33.252459 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2026-01-07 00:26:33.252470 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.81s
2026-01-07 00:26:33.252481 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s
2026-01-07 00:26:33.252499 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s
2026-01-07 00:26:33.252510 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s
2026-01-07 00:26:33.252521 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s
2026-01-07 00:26:33.252532 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2026-01-07 00:26:33.252543 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2026-01-07 00:26:33.252555 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-01-07 00:26:33.252566 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-01-07 00:26:33.252577 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-01-07 00:26:33.252588 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-01-07 00:26:33.252599 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2026-01-07 00:26:33.252610 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-01-07 00:26:33.252621 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2026-01-07 00:26:33.621143 | orchestrator | + osism apply --environment custom facts
2026-01-07 00:26:35.591867 | orchestrator | 2026-01-07 00:26:35 | INFO  | Trying to run play facts in environment custom
2026-01-07 00:26:45.751091 | orchestrator | 2026-01-07 00:26:45 | INFO  | Task aa72f835-9288-4e72-84da-c184ab645b78 (facts) was prepared for execution.
2026-01-07 00:26:45.751232 | orchestrator | 2026-01-07 00:26:45 | INFO  | It takes a moment until task aa72f835-9288-4e72-84da-c184ab645b78 (facts) has been started and output is visible here.
2026-01-07 00:27:34.523581 | orchestrator |
2026-01-07 00:27:34.523805 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-07 00:27:34.523826 | orchestrator |
2026-01-07 00:27:34.523839 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-07 00:27:34.523851 | orchestrator | Wednesday 07 January 2026 00:26:49 +0000 (0:00:00.085) 0:00:00.085 *****
2026-01-07 00:27:34.523862 | orchestrator | ok: [testbed-manager]
2026-01-07 00:27:34.523874 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:34.523886 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:27:34.523897 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:27:34.523908 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:34.523937 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:27:34.523948 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:34.523959 | orchestrator |
2026-01-07 00:27:34.523971 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-07 00:27:34.523982 | orchestrator | Wednesday 07 January 2026 00:26:51 +0000 (0:00:01.405) 0:00:01.491 *****
2026-01-07 00:27:34.523992 | orchestrator | ok: [testbed-manager]
2026-01-07 00:27:34.524003 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:27:34.524015 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:34.524025 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:34.524039 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:27:34.524051 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:34.524066 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:27:34.524078 | orchestrator |
2026-01-07 00:27:34.524091 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-07 00:27:34.524103 | orchestrator |
2026-01-07 00:27:34.524116 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-07 00:27:34.524129 | orchestrator | Wednesday 07 January 2026 00:26:52 +0000 (0:00:01.295) 0:00:02.786 *****
2026-01-07 00:27:34.524142 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:34.524154 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:34.524168 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:34.524201 | orchestrator |
2026-01-07 00:27:34.524214 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-07 00:27:34.524227 | orchestrator | Wednesday 07 January 2026 00:26:52 +0000 (0:00:00.116) 0:00:02.903 *****
2026-01-07 00:27:34.524240 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:34.524253 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:34.524265 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:34.524277 | orchestrator |
2026-01-07 00:27:34.524290 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-07 00:27:34.524303 | orchestrator | Wednesday 07 January 2026 00:26:52 +0000 (0:00:00.209) 0:00:03.112 *****
2026-01-07 00:27:34.524315 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:34.524327 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:34.524340 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:34.524352 | orchestrator |
2026-01-07 00:27:34.524365 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-07 00:27:34.524378 | orchestrator | Wednesday 07 January 2026 00:26:53 +0000 (0:00:00.223) 0:00:03.335 *****
2026-01-07 00:27:34.524392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:27:34.524406 | orchestrator |
2026-01-07 00:27:34.524420 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-07 00:27:34.524433 | orchestrator | Wednesday 07 January 2026 00:26:53 +0000 (0:00:00.152) 0:00:03.488 *****
2026-01-07 00:27:34.524443 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:34.524454 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:34.524465 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:34.524476 | orchestrator |
2026-01-07 00:27:34.524487 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-07 00:27:34.524497 | orchestrator | Wednesday 07 January 2026 00:26:53 +0000 (0:00:00.446) 0:00:03.934 *****
2026-01-07 00:27:34.524508 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:27:34.524519 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:27:34.524530 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:27:34.524541 | orchestrator |
2026-01-07 00:27:34.524551 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-07 00:27:34.524562 | orchestrator | Wednesday 07 January 2026 00:26:53 +0000 (0:00:00.151) 0:00:04.086 *****
2026-01-07 00:27:34.524573 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:34.524584 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:34.524595 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:34.524605 | orchestrator |
2026-01-07 00:27:34.524616 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-07 00:27:34.524670 | orchestrator | Wednesday 07 January 2026 00:26:55 +0000 (0:00:01.135) 0:00:05.221 *****
2026-01-07 00:27:34.524682 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:34.524692 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:34.524703 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:34.524714 | orchestrator |
2026-01-07 00:27:34.524725 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-07 00:27:34.524737 | orchestrator | Wednesday 07 January 2026 00:26:55 +0000 (0:00:00.573) 0:00:05.794 *****
2026-01-07 00:27:34.524757 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:34.524777 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:34.524795 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:34.524814 | orchestrator |
2026-01-07 00:27:34.524833 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-07 00:27:34.524851 | orchestrator | Wednesday 07 January 2026 00:26:56 +0000 (0:00:01.101) 0:00:06.896 *****
2026-01-07 00:27:34.524870 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:34.524890 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:34.524909 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:34.524929 | orchestrator |
2026-01-07 00:27:34.524950 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-07 00:27:34.524983 | orchestrator | Wednesday 07 January 2026 00:27:13 +0000 (0:00:16.870) 0:00:23.767 *****
2026-01-07 00:27:34.524995 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:27:34.525006 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:27:34.525017 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:27:34.525028 | orchestrator |
2026-01-07 00:27:34.525039 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-07 00:27:34.525073 | orchestrator | Wednesday 07 January 2026 00:27:13 +0000 (0:00:00.113) 0:00:23.881 *****
2026-01-07 00:27:34.525084 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:34.525095 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:34.525106 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:34.525117 | orchestrator |
2026-01-07 00:27:34.525127 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-07 00:27:34.525138 | orchestrator | Wednesday 07 January 2026 00:27:22 +0000 (0:00:08.896) 0:00:32.777 *****
2026-01-07 00:27:34.525149 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:34.525160 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:34.525171 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:34.525182 | orchestrator |
2026-01-07 00:27:34.525193 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-07 00:27:34.525204 | orchestrator | Wednesday 07 January 2026 00:27:23 +0000 (0:00:00.472) 0:00:33.250 *****
2026-01-07 00:27:34.525215 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-07 00:27:34.525226 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-07 00:27:34.525237 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-07 00:27:34.525248 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-07 00:27:34.525259 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-07 00:27:34.525270 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-07 00:27:34.525281 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-07 00:27:34.525291 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-07 00:27:34.525302 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-07 00:27:34.525313 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-07 00:27:34.525324 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-07 00:27:34.525334 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-07 00:27:34.525345 | orchestrator |
2026-01-07 00:27:34.525356 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-07 00:27:34.525367 | orchestrator | Wednesday 07 January 2026 00:27:26 +0000 (0:00:03.933) 0:00:37.183 *****
2026-01-07 00:27:34.525378 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:34.525388 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:34.525399 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:34.525410 | orchestrator |
2026-01-07 00:27:34.525421 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-07 00:27:34.525432 | orchestrator |
2026-01-07 00:27:34.525443 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:27:34.525453 | orchestrator | Wednesday 07 January 2026 00:27:28 +0000 (0:00:01.612) 0:00:38.795 *****
2026-01-07 00:27:34.525464 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:27:34.525475 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:27:34.525485 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:27:34.525496 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:34.525507 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:34.525517 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:34.525528 | orchestrator | ok: [testbed-manager]
2026-01-07 00:27:34.525539 | orchestrator |
2026-01-07 00:27:34.525550 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:27:34.525571 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:27:34.525583 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:27:34.525595 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:27:34.525606 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:27:34.525689 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:27:34.525704 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:27:34.525716 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:27:34.525727 | orchestrator |
2026-01-07 00:27:34.525738 | orchestrator |
2026-01-07 00:27:34.525749 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:27:34.525760 | orchestrator | Wednesday 07 January 2026 00:27:34 +0000 (0:00:05.901) 0:00:44.697 *****
2026-01-07 00:27:34.525771 | orchestrator | ===============================================================================
2026-01-07 00:27:34.525782 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.87s
2026-01-07 00:27:34.525792 | orchestrator | Install required packages (Debian) -------------------------------------- 8.90s
2026-01-07 00:27:34.525803 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.90s
2026-01-07 00:27:34.525814 | orchestrator | Copy fact files --------------------------------------------------------- 3.93s
2026-01-07 00:27:34.525825 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.61s
2026-01-07 00:27:34.525836 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-01-07 00:27:34.525854 | orchestrator | Copy fact file ---------------------------------------------------------- 1.30s
2026-01-07 00:27:34.761251 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.14s
2026-01-07 00:27:34.761511 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2026-01-07 00:27:34.761543 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.57s
2026-01-07 00:27:34.761563 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-01-07 00:27:34.761651 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-01-07 00:27:34.761749 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-07 00:27:34.761770 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-01-07 00:27:34.761789 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-01-07 00:27:34.761808 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-01-07 00:27:34.761846 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-01-07 00:27:34.761867 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-01-07 00:27:35.071323 | orchestrator | + osism apply bootstrap
2026-01-07 00:27:47.357217 | orchestrator | 2026-01-07 00:27:47 | INFO  | Task 5a2b1732-646a-49af-9f2e-6011668d4012 (bootstrap) was prepared for execution.
2026-01-07 00:27:47.357350 | orchestrator | 2026-01-07 00:27:47 | INFO  | It takes a moment until task 5a2b1732-646a-49af-9f2e-6011668d4012 (bootstrap) has been started and output is visible here.
2026-01-07 00:28:04.804385 | orchestrator |
2026-01-07 00:28:04.804568 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-07 00:28:04.804586 | orchestrator |
2026-01-07 00:28:04.804598 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-07 00:28:04.804609 | orchestrator | Wednesday 07 January 2026 00:27:51 +0000 (0:00:00.157) 0:00:00.157 *****
2026-01-07 00:28:04.804695 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:04.804716 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:04.804735 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:04.804754 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:04.804773 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:04.804791 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:04.804811 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:04.804831 | orchestrator |
2026-01-07 00:28:04.804850 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-07 00:28:04.804864 | orchestrator |
2026-01-07 00:28:04.804875 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:28:04.804886 | orchestrator | Wednesday 07 January 2026 00:27:52 +0000 (0:00:00.287) 0:00:00.444 *****
2026-01-07 00:28:04.804897 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:04.804908 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:04.804919 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:04.804932 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:04.804946 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:04.804958 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:04.804971 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:04.804983 | orchestrator |
2026-01-07 00:28:04.804996 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-07 00:28:04.805008 | orchestrator |
2026-01-07 00:28:04.805021 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:28:04.805034 | orchestrator | Wednesday 07 January 2026 00:27:55 +0000 (0:00:03.697) 0:00:04.141 *****
2026-01-07 00:28:04.805046 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-07 00:28:04.805060 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-07 00:28:04.805073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-07 00:28:04.805085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:28:04.805097 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-07 00:28:04.805109 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-07 00:28:04.805123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:28:04.805135 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-07 00:28:04.805148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:28:04.805160 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-07 00:28:04.805173 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-07 00:28:04.805186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-07 00:28:04.805198 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-07 00:28:04.805208 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-07 00:28:04.805219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-07 00:28:04.805229 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-07 00:28:04.805240 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-07 00:28:04.805250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-07 00:28:04.805261 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-07 00:28:04.805271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-07 00:28:04.805282 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:28:04.805293 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-07 00:28:04.805303 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-07 00:28:04.805324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-07 00:28:04.805334 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-07 00:28:04.805345 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-07 00:28:04.805355 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:28:04.805366 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-07 00:28:04.805377 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-07 00:28:04.805387 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-07 00:28:04.805398 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-07 00:28:04.805408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-07 00:28:04.805433 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-07 00:28:04.805444 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-07 00:28:04.805454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:28:04.805465 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-07 00:28:04.805476 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-07 00:28:04.805486 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-07 00:28:04.805497 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:28:04.805507 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-07 00:28:04.805518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-07 00:28:04.805528 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-07 00:28:04.805538 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-07 00:28:04.805549 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-07 00:28:04.805560 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:28:04.805571 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-07 00:28:04.805601 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-07 00:28:04.805612 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:28:04.805651 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-07 00:28:04.805662 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-07 00:28:04.805679 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:28:04.805697 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-07 00:28:04.805715 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-07 00:28:04.805733 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-07 00:28:04.805753 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-07 00:28:04.805771 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:28:04.805787 | orchestrator |
2026-01-07 00:28:04.805798 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-07 00:28:04.805809 | orchestrator |
2026-01-07 00:28:04.805819 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-07 00:28:04.805830 | orchestrator | Wednesday 07 January 2026 00:27:56 +0000 (0:00:00.537)
0:00:04.679 ***** 2026-01-07 00:28:04.805841 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:04.805851 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:04.805862 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:04.805872 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:04.805883 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:04.805894 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:04.805904 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:04.805915 | orchestrator | 2026-01-07 00:28:04.805926 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-07 00:28:04.805936 | orchestrator | Wednesday 07 January 2026 00:27:58 +0000 (0:00:02.270) 0:00:06.950 ***** 2026-01-07 00:28:04.805947 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:04.805958 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:04.805976 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:04.805986 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:04.805997 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:04.806008 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:04.806078 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:04.806092 | orchestrator | 2026-01-07 00:28:04.806103 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-07 00:28:04.806114 | orchestrator | Wednesday 07 January 2026 00:27:59 +0000 (0:00:01.202) 0:00:08.152 ***** 2026-01-07 00:28:04.806126 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:04.806140 | orchestrator | 2026-01-07 00:28:04.806151 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-07 00:28:04.806162 | orchestrator | Wednesday 07 
January 2026 00:28:00 +0000 (0:00:00.281) 0:00:08.434 ***** 2026-01-07 00:28:04.806172 | orchestrator | changed: [testbed-manager] 2026-01-07 00:28:04.806183 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:04.806194 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:04.806204 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:04.806215 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:04.806226 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:04.806236 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:04.806247 | orchestrator | 2026-01-07 00:28:04.806258 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-07 00:28:04.806269 | orchestrator | Wednesday 07 January 2026 00:28:02 +0000 (0:00:02.132) 0:00:10.566 ***** 2026-01-07 00:28:04.806279 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:28:04.806292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:04.806305 | orchestrator | 2026-01-07 00:28:04.806316 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-07 00:28:04.806327 | orchestrator | Wednesday 07 January 2026 00:28:02 +0000 (0:00:00.308) 0:00:10.875 ***** 2026-01-07 00:28:04.806337 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:04.806348 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:04.806359 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:04.806369 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:04.806380 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:04.806390 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:04.806401 | orchestrator | 2026-01-07 00:28:04.806412 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-01-07 00:28:04.806423 | orchestrator | Wednesday 07 January 2026 00:28:03 +0000 (0:00:01.094) 0:00:11.969 ***** 2026-01-07 00:28:04.806433 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:28:04.806444 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:04.806455 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:04.806465 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:04.806476 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:04.806486 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:04.806497 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:04.806508 | orchestrator | 2026-01-07 00:28:04.806519 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-07 00:28:04.806529 | orchestrator | Wednesday 07 January 2026 00:28:04 +0000 (0:00:00.625) 0:00:12.595 ***** 2026-01-07 00:28:04.806540 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:28:04.806551 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:04.806561 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:28:04.806572 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:04.806582 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:04.806600 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:04.806611 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:04.806648 | orchestrator | 2026-01-07 00:28:04.806664 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-07 00:28:04.806694 | orchestrator | Wednesday 07 January 2026 00:28:04 +0000 (0:00:00.428) 0:00:13.024 ***** 2026-01-07 00:28:04.806715 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:28:04.806733 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:28:04.806763 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:18.340441 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 00:28:18.340560 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:18.340571 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:18.340578 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:18.340586 | orchestrator | 2026-01-07 00:28:18.340595 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-07 00:28:18.340604 | orchestrator | Wednesday 07 January 2026 00:28:04 +0000 (0:00:00.233) 0:00:13.258 ***** 2026-01-07 00:28:18.340634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:18.340657 | orchestrator | 2026-01-07 00:28:18.340665 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-07 00:28:18.340673 | orchestrator | Wednesday 07 January 2026 00:28:05 +0000 (0:00:00.322) 0:00:13.581 ***** 2026-01-07 00:28:18.340681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:18.340688 | orchestrator | 2026-01-07 00:28:18.340695 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-07 00:28:18.340702 | orchestrator | Wednesday 07 January 2026 00:28:05 +0000 (0:00:00.321) 0:00:13.902 ***** 2026-01-07 00:28:18.340708 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.340716 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.340723 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:18.340730 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:18.340736 | orchestrator | ok: [testbed-node-5] 2026-01-07 
00:28:18.340743 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.340749 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:18.340756 | orchestrator | 2026-01-07 00:28:18.340763 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-07 00:28:18.340770 | orchestrator | Wednesday 07 January 2026 00:28:07 +0000 (0:00:01.572) 0:00:15.474 ***** 2026-01-07 00:28:18.340777 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:28:18.340784 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:28:18.340791 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:18.340798 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:28:18.340805 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:18.340812 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:18.340820 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:18.340827 | orchestrator | 2026-01-07 00:28:18.340834 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-07 00:28:18.340841 | orchestrator | Wednesday 07 January 2026 00:28:07 +0000 (0:00:00.225) 0:00:15.700 ***** 2026-01-07 00:28:18.340848 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.340855 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.340862 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.340869 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:18.340876 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:18.340883 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:18.340890 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:18.340897 | orchestrator | 2026-01-07 00:28:18.340904 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-07 00:28:18.340935 | orchestrator | Wednesday 07 January 2026 00:28:07 +0000 (0:00:00.607) 0:00:16.307 ***** 2026-01-07 00:28:18.340944 | orchestrator | skipping: 
[testbed-manager] 2026-01-07 00:28:18.340954 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:28:18.340962 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:18.340970 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:28:18.340984 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:18.340995 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:18.341003 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:18.341011 | orchestrator | 2026-01-07 00:28:18.341019 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-07 00:28:18.341029 | orchestrator | Wednesday 07 January 2026 00:28:08 +0000 (0:00:00.365) 0:00:16.673 ***** 2026-01-07 00:28:18.341044 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:18.341058 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341072 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:18.341082 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:18.341090 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:18.341098 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:18.341107 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:18.341115 | orchestrator | 2026-01-07 00:28:18.341124 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-07 00:28:18.341143 | orchestrator | Wednesday 07 January 2026 00:28:08 +0000 (0:00:00.525) 0:00:17.198 ***** 2026-01-07 00:28:18.341152 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341161 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:18.341170 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:18.341178 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:18.341185 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:18.341193 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:18.341199 | orchestrator | changed: 
[testbed-node-0] 2026-01-07 00:28:18.341208 | orchestrator | 2026-01-07 00:28:18.341218 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-07 00:28:18.341230 | orchestrator | Wednesday 07 January 2026 00:28:09 +0000 (0:00:01.150) 0:00:18.348 ***** 2026-01-07 00:28:18.341240 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:18.341248 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.341256 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341263 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:18.341272 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:18.341281 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:18.341288 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.341295 | orchestrator | 2026-01-07 00:28:18.341302 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-07 00:28:18.341309 | orchestrator | Wednesday 07 January 2026 00:28:11 +0000 (0:00:01.065) 0:00:19.414 ***** 2026-01-07 00:28:18.341334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:18.341342 | orchestrator | 2026-01-07 00:28:18.341349 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-07 00:28:18.341354 | orchestrator | Wednesday 07 January 2026 00:28:11 +0000 (0:00:00.289) 0:00:19.704 ***** 2026-01-07 00:28:18.341360 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:28:18.341365 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:18.341372 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:18.341377 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:18.341383 | orchestrator | changed: [testbed-node-2] 2026-01-07 
00:28:18.341388 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:18.341394 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:18.341399 | orchestrator | 2026-01-07 00:28:18.341405 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-07 00:28:18.341417 | orchestrator | Wednesday 07 January 2026 00:28:13 +0000 (0:00:02.227) 0:00:21.931 ***** 2026-01-07 00:28:18.341422 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341428 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.341435 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.341442 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:18.341448 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:18.341455 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:18.341462 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:18.341468 | orchestrator | 2026-01-07 00:28:18.341475 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-07 00:28:18.341481 | orchestrator | Wednesday 07 January 2026 00:28:13 +0000 (0:00:00.234) 0:00:22.166 ***** 2026-01-07 00:28:18.341488 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341495 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.341502 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.341508 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:18.341515 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:18.341522 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:18.341529 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:18.341535 | orchestrator | 2026-01-07 00:28:18.341542 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-07 00:28:18.341549 | orchestrator | Wednesday 07 January 2026 00:28:14 +0000 (0:00:00.254) 0:00:22.420 ***** 2026-01-07 00:28:18.341555 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341562 | 
orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.341569 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.341576 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:18.341583 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:18.341589 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:18.341596 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:18.341603 | orchestrator | 2026-01-07 00:28:18.341610 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-07 00:28:18.341647 | orchestrator | Wednesday 07 January 2026 00:28:14 +0000 (0:00:00.276) 0:00:22.697 ***** 2026-01-07 00:28:18.341656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:18.341665 | orchestrator | 2026-01-07 00:28:18.341671 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-07 00:28:18.341677 | orchestrator | Wednesday 07 January 2026 00:28:14 +0000 (0:00:00.321) 0:00:23.018 ***** 2026-01-07 00:28:18.341684 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341690 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.341697 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.341704 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:18.341710 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:18.341718 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:18.341724 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:18.341731 | orchestrator | 2026-01-07 00:28:18.341738 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-07 00:28:18.341745 | orchestrator | Wednesday 07 January 2026 00:28:15 +0000 (0:00:00.575) 0:00:23.594 ***** 2026-01-07 00:28:18.341752 | orchestrator | 
skipping: [testbed-manager] 2026-01-07 00:28:18.341758 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:28:18.341766 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:18.341772 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:28:18.341779 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:18.341786 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:18.341793 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:18.341800 | orchestrator | 2026-01-07 00:28:18.341807 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-07 00:28:18.341813 | orchestrator | Wednesday 07 January 2026 00:28:15 +0000 (0:00:00.229) 0:00:23.823 ***** 2026-01-07 00:28:18.341821 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341838 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.341844 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.341851 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:18.341858 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:18.341864 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:18.341871 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:18.341877 | orchestrator | 2026-01-07 00:28:18.341884 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-07 00:28:18.341891 | orchestrator | Wednesday 07 January 2026 00:28:16 +0000 (0:00:01.082) 0:00:24.906 ***** 2026-01-07 00:28:18.341897 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341904 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.341912 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.341918 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:18.341925 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:18.341932 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:18.341939 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:18.341946 | orchestrator | 
2026-01-07 00:28:18.341953 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-07 00:28:18.341960 | orchestrator | Wednesday 07 January 2026 00:28:17 +0000 (0:00:00.581) 0:00:25.488 ***** 2026-01-07 00:28:18.341966 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:18.341972 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:18.341979 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:18.341986 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:18.341999 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:29:01.506939 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:29:01.507072 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:29:01.507084 | orchestrator | 2026-01-07 00:29:01.507094 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-07 00:29:01.507105 | orchestrator | Wednesday 07 January 2026 00:28:18 +0000 (0:00:01.194) 0:00:26.682 ***** 2026-01-07 00:29:01.507113 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.507123 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.507131 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.507139 | orchestrator | changed: [testbed-manager] 2026-01-07 00:29:01.507147 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:29:01.507155 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:29:01.507163 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:29:01.507171 | orchestrator | 2026-01-07 00:29:01.507179 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-07 00:29:01.507187 | orchestrator | Wednesday 07 January 2026 00:28:35 +0000 (0:00:17.443) 0:00:44.126 ***** 2026-01-07 00:29:01.507196 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.507203 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.507211 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.507220 | orchestrator 
| ok: [testbed-node-5] 2026-01-07 00:29:01.507228 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.507236 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.507243 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.507251 | orchestrator | 2026-01-07 00:29:01.507259 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-07 00:29:01.507267 | orchestrator | Wednesday 07 January 2026 00:28:36 +0000 (0:00:00.244) 0:00:44.370 ***** 2026-01-07 00:29:01.507275 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.507283 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.507291 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.507298 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.507306 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.507314 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.507322 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.507330 | orchestrator | 2026-01-07 00:29:01.507338 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-07 00:29:01.507346 | orchestrator | Wednesday 07 January 2026 00:28:36 +0000 (0:00:00.239) 0:00:44.610 ***** 2026-01-07 00:29:01.507385 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.507400 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.507415 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.507429 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.507443 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.507456 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.507468 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.507500 | orchestrator | 2026-01-07 00:29:01.507518 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-07 00:29:01.507531 | orchestrator | Wednesday 07 January 2026 00:28:36 +0000 (0:00:00.225) 0:00:44.835 ***** 2026-01-07 
00:29:01.507549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:29:01.507566 | orchestrator | 2026-01-07 00:29:01.507582 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-01-07 00:29:01.507596 | orchestrator | Wednesday 07 January 2026 00:28:36 +0000 (0:00:00.313) 0:00:45.149 ***** 2026-01-07 00:29:01.507633 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.507644 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.507654 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.507663 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.507672 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.507681 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.507690 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.507699 | orchestrator | 2026-01-07 00:29:01.507709 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-01-07 00:29:01.507718 | orchestrator | Wednesday 07 January 2026 00:28:38 +0000 (0:00:02.088) 0:00:47.237 ***** 2026-01-07 00:29:01.507728 | orchestrator | changed: [testbed-manager] 2026-01-07 00:29:01.507736 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:29:01.507746 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:29:01.507755 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:29:01.507763 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:29:01.507772 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:29:01.507781 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:29:01.507791 | orchestrator | 2026-01-07 00:29:01.507800 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-01-07 00:29:01.507810 | 
orchestrator | Wednesday 07 January 2026 00:28:40 +0000 (0:00:01.173) 0:00:48.411 ***** 2026-01-07 00:29:01.507819 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.507827 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.507835 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.507843 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.507851 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.507859 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.507866 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.507874 | orchestrator | 2026-01-07 00:29:01.507882 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-01-07 00:29:01.507890 | orchestrator | Wednesday 07 January 2026 00:28:40 +0000 (0:00:00.838) 0:00:49.249 ***** 2026-01-07 00:29:01.507920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:29:01.507931 | orchestrator | 2026-01-07 00:29:01.507939 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-01-07 00:29:01.507948 | orchestrator | Wednesday 07 January 2026 00:28:41 +0000 (0:00:00.317) 0:00:49.566 ***** 2026-01-07 00:29:01.507955 | orchestrator | changed: [testbed-manager] 2026-01-07 00:29:01.507963 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:29:01.507971 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:29:01.507979 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:29:01.507986 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:29:01.508003 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:29:01.508011 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:29:01.508019 | orchestrator | 2026-01-07 00:29:01.508046 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-01-07 00:29:01.508054 | orchestrator | Wednesday 07 January 2026 00:28:42 +0000 (0:00:01.091) 0:00:50.658 ***** 2026-01-07 00:29:01.508062 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:29:01.508070 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:29:01.508078 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:29:01.508085 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:29:01.508093 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:29:01.508101 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:29:01.508109 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:29:01.508116 | orchestrator | 2026-01-07 00:29:01.508124 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-01-07 00:29:01.508132 | orchestrator | Wednesday 07 January 2026 00:28:42 +0000 (0:00:00.260) 0:00:50.918 ***** 2026-01-07 00:29:01.508140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:29:01.508148 | orchestrator | 2026-01-07 00:29:01.508156 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-01-07 00:29:01.508164 | orchestrator | Wednesday 07 January 2026 00:28:42 +0000 (0:00:00.381) 0:00:51.300 ***** 2026-01-07 00:29:01.508172 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.508180 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.508188 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.508195 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.508203 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.508211 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.508218 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.508226 | 
orchestrator | 2026-01-07 00:29:01.508234 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-01-07 00:29:01.508242 | orchestrator | Wednesday 07 January 2026 00:28:44 +0000 (0:00:01.874) 0:00:53.174 ***** 2026-01-07 00:29:01.508250 | orchestrator | changed: [testbed-manager] 2026-01-07 00:29:01.508258 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:29:01.508265 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:29:01.508273 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:29:01.508281 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:29:01.508288 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:29:01.508296 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:29:01.508304 | orchestrator | 2026-01-07 00:29:01.508312 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-01-07 00:29:01.508320 | orchestrator | Wednesday 07 January 2026 00:28:45 +0000 (0:00:01.165) 0:00:54.340 ***** 2026-01-07 00:29:01.508327 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:29:01.508335 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:29:01.508343 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:29:01.508351 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:29:01.508358 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:29:01.508366 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:29:01.508374 | orchestrator | changed: [testbed-manager] 2026-01-07 00:29:01.508382 | orchestrator | 2026-01-07 00:29:01.508389 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-01-07 00:29:01.508397 | orchestrator | Wednesday 07 January 2026 00:28:58 +0000 (0:00:12.737) 0:01:07.077 ***** 2026-01-07 00:29:01.508405 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.508413 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.508421 | orchestrator | ok: 
[testbed-node-4] 2026-01-07 00:29:01.508429 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.508437 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.508444 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.508458 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.508466 | orchestrator | 2026-01-07 00:29:01.508473 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-01-07 00:29:01.508481 | orchestrator | Wednesday 07 January 2026 00:28:59 +0000 (0:00:00.943) 0:01:08.021 ***** 2026-01-07 00:29:01.508489 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.508497 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.508505 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.508513 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.508520 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.508528 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.508535 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.508543 | orchestrator | 2026-01-07 00:29:01.508551 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-01-07 00:29:01.508559 | orchestrator | Wednesday 07 January 2026 00:29:00 +0000 (0:00:01.034) 0:01:09.055 ***** 2026-01-07 00:29:01.508567 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.508575 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.508585 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.508598 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.508629 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.508643 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.508662 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.508674 | orchestrator | 2026-01-07 00:29:01.508687 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-01-07 00:29:01.508701 | orchestrator | 
Wednesday 07 January 2026 00:29:00 +0000 (0:00:00.243) 0:01:09.299 ***** 2026-01-07 00:29:01.508714 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:01.508725 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:01.508733 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:01.508740 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:01.508748 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:01.508756 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:01.508763 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:01.508771 | orchestrator | 2026-01-07 00:29:01.508779 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-07 00:29:01.508787 | orchestrator | Wednesday 07 January 2026 00:29:01 +0000 (0:00:00.229) 0:01:09.529 ***** 2026-01-07 00:29:01.508795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:29:01.508803 | orchestrator | 2026-01-07 00:29:01.508818 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-07 00:31:21.480310 | orchestrator | Wednesday 07 January 2026 00:29:01 +0000 (0:00:00.324) 0:01:09.854 ***** 2026-01-07 00:31:21.480464 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:21.480486 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:21.480498 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:21.480510 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:21.480521 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:21.480533 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:21.480544 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:21.480555 | orchestrator | 2026-01-07 00:31:21.480567 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-01-07 00:31:21.480619 | orchestrator | Wednesday 07 January 2026 00:29:03 +0000 (0:00:02.101) 0:01:11.956 ***** 2026-01-07 00:31:21.480636 | orchestrator | changed: [testbed-manager] 2026-01-07 00:31:21.480648 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:21.480658 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:21.480669 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:21.480680 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:21.480690 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:21.480701 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:21.480712 | orchestrator | 2026-01-07 00:31:21.480745 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-07 00:31:21.480757 | orchestrator | Wednesday 07 January 2026 00:29:04 +0000 (0:00:00.629) 0:01:12.585 ***** 2026-01-07 00:31:21.480771 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:21.480790 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:21.480808 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:21.480826 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:21.480845 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:21.480863 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:21.480884 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:21.480903 | orchestrator | 2026-01-07 00:31:21.480921 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-07 00:31:21.480940 | orchestrator | Wednesday 07 January 2026 00:29:04 +0000 (0:00:00.235) 0:01:12.821 ***** 2026-01-07 00:31:21.480958 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:21.480975 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:21.480994 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:21.481014 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:21.481033 | orchestrator | ok: [testbed-node-3] 
2026-01-07 00:31:21.481044 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:21.481055 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:21.481066 | orchestrator | 2026-01-07 00:31:21.481077 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-07 00:31:21.481087 | orchestrator | Wednesday 07 January 2026 00:29:06 +0000 (0:00:01.575) 0:01:14.397 ***** 2026-01-07 00:31:21.481098 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:21.481109 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:21.481119 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:21.481130 | orchestrator | changed: [testbed-manager] 2026-01-07 00:31:21.481141 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:21.481151 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:21.481162 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:21.481172 | orchestrator | 2026-01-07 00:31:21.481183 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-07 00:31:21.481194 | orchestrator | Wednesday 07 January 2026 00:29:08 +0000 (0:00:02.432) 0:01:16.830 ***** 2026-01-07 00:31:21.481205 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:21.481216 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:21.481226 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:21.481237 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:21.481248 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:21.481258 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:21.481269 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:21.481280 | orchestrator | 2026-01-07 00:31:21.481291 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-07 00:31:21.481302 | orchestrator | Wednesday 07 January 2026 00:29:11 +0000 (0:00:02.871) 0:01:19.701 ***** 2026-01-07 00:31:21.481312 | orchestrator | ok: 
[testbed-manager] 2026-01-07 00:31:21.481323 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:21.481334 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:21.481344 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:21.481355 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:21.481365 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:21.481376 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:21.481386 | orchestrator | 2026-01-07 00:31:21.481397 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-07 00:31:21.481408 | orchestrator | Wednesday 07 January 2026 00:29:51 +0000 (0:00:39.853) 0:01:59.555 ***** 2026-01-07 00:31:21.481418 | orchestrator | changed: [testbed-manager] 2026-01-07 00:31:21.481429 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:21.481440 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:21.481450 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:21.481461 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:21.481471 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:21.481482 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:21.481502 | orchestrator | 2026-01-07 00:31:21.481523 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-07 00:31:21.481534 | orchestrator | Wednesday 07 January 2026 00:31:05 +0000 (0:01:14.329) 0:03:13.885 ***** 2026-01-07 00:31:21.481545 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:21.481555 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:21.481566 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:21.481577 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:21.481616 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:21.481627 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:21.481638 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:21.481648 | orchestrator | 2026-01-07 00:31:21.481659 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-01-07 00:31:21.481670 | orchestrator | Wednesday 07 January 2026 00:31:07 +0000 (0:00:01.865) 0:03:15.751 ***** 2026-01-07 00:31:21.481681 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:21.481692 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:21.481702 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:21.481713 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:21.481723 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:21.481734 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:21.481745 | orchestrator | changed: [testbed-manager] 2026-01-07 00:31:21.481755 | orchestrator | 2026-01-07 00:31:21.481766 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-07 00:31:21.481777 | orchestrator | Wednesday 07 January 2026 00:31:20 +0000 (0:00:12.856) 0:03:28.607 ***** 2026-01-07 00:31:21.481818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-07 00:31:21.481837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-07 00:31:21.481853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-07 00:31:21.481866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-07 00:31:21.481877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-07 00:31:21.481888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-07 00:31:21.481907 | orchestrator | 2026-01-07 00:31:21.481918 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-07 00:31:21.481929 | orchestrator | Wednesday 07 January 2026 00:31:20 +0000 (0:00:00.436) 0:03:29.044 ***** 2026-01-07 00:31:21.481944 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-01-07 00:31:21.481956 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:21.481967 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-07 00:31:21.481982 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:31:21.482001 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-07 00:31:21.482118 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:31:21.482143 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-07 00:31:21.482161 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:31:21.482178 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-07 00:31:21.482196 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-07 00:31:21.482213 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-07 00:31:21.482232 | orchestrator | 2026-01-07 00:31:21.482250 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-07 00:31:21.482269 | orchestrator | Wednesday 07 January 2026 00:31:21 +0000 (0:00:00.715) 0:03:29.759 ***** 2026-01-07 00:31:21.482289 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-07 00:31:21.482302 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-07 00:31:21.482313 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-07 00:31:21.482325 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-07 00:31:21.482335 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-01-07 00:31:21.482359 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-07 00:31:29.675676 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-07 00:31:29.675774 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-07 00:31:29.675786 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-07 00:31:29.675796 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-07 00:31:29.675827 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-07 00:31:29.675840 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-07 00:31:29.675852 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-07 00:31:29.675864 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-07 00:31:29.675874 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-07 00:31:29.675881 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-07 00:31:29.675888 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-07 00:31:29.675896 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-07 00:31:29.675903 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-07 00:31:29.675927 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 
8192})  2026-01-07 00:31:29.675934 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-07 00:31:29.675940 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-07 00:31:29.675947 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-07 00:31:29.675954 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-07 00:31:29.675961 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-07 00:31:29.675968 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:29.675975 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-07 00:31:29.675982 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-07 00:31:29.675989 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-07 00:31:29.675996 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-07 00:31:29.676002 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-07 00:31:29.676009 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-07 00:31:29.676015 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-07 00:31:29.676022 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:31:29.676029 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-07 00:31:29.676035 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 
16777216})  2026-01-07 00:31:29.676042 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-07 00:31:29.676048 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-07 00:31:29.676060 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-07 00:31:29.676071 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-07 00:31:29.676082 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-07 00:31:29.676093 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-07 00:31:29.676103 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:31:29.676114 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:31:29.676125 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-07 00:31:29.676136 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-07 00:31:29.676145 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-07 00:31:29.676152 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-07 00:31:29.676159 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-07 00:31:29.676182 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-07 00:31:29.676192 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-07 00:31:29.676200 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 
3}) 2026-01-07 00:31:29.676227 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-07 00:31:29.676239 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-07 00:31:29.676250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-07 00:31:29.676261 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-07 00:31:29.676272 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-07 00:31:29.676283 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-07 00:31:29.676294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-07 00:31:29.676303 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-07 00:31:29.676312 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-07 00:31:29.676322 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-07 00:31:29.676332 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-07 00:31:29.676341 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-07 00:31:29.676351 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-07 00:31:29.676361 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-07 00:31:29.676372 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-07 00:31:29.676383 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-07 00:31:29.676393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-07 00:31:29.676404 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-07 00:31:29.676415 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-07 00:31:29.676425 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-07 00:31:29.676438 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-07 00:31:29.676448 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-07 00:31:29.676460 | orchestrator | 2026-01-07 00:31:29.676471 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-01-07 00:31:29.676482 | orchestrator | Wednesday 07 January 2026 00:31:27 +0000 (0:00:05.945) 0:03:35.704 ***** 2026-01-07 00:31:29.676493 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-07 00:31:29.676504 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-07 00:31:29.676515 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-07 00:31:29.676526 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-07 00:31:29.676537 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-07 00:31:29.676548 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-07 00:31:29.676560 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-07 
00:31:29.676601 | orchestrator | 2026-01-07 00:31:29.676620 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-01-07 00:31:29.676631 | orchestrator | Wednesday 07 January 2026 00:31:29 +0000 (0:00:01.717) 0:03:37.422 ***** 2026-01-07 00:31:29.676643 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-07 00:31:29.676662 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:29.676674 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-07 00:31:29.676685 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-07 00:31:29.676696 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:31:29.676707 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:31:29.676719 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-07 00:31:29.676730 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:31:29.676742 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-07 00:31:29.676754 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-07 00:31:29.676775 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-07 00:31:43.972822 | orchestrator | 2026-01-07 00:31:43.972941 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-01-07 00:31:43.972958 | orchestrator | Wednesday 07 January 2026 00:31:29 +0000 (0:00:00.594) 0:03:38.016 ***** 2026-01-07 00:31:43.972970 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-07 
00:31:43.972983 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:43.972995 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:43.973007 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:43.973018 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:43.973029 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:43.973040 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:43.973051 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:43.973062 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:43.973073 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:43.973084 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:43.973094 | orchestrator |
2026-01-07 00:31:43.973105 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-07 00:31:43.973116 | orchestrator | Wednesday 07 January 2026 00:31:30 +0000 (0:00:00.617) 0:03:38.633 *****
2026-01-07 00:31:43.973128 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:43.973139 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:43.973150 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:43.973160 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:43.973171 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:43.973182 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:43.973193 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:43.973204 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:43.973215 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:43.973226 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:43.973237 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:43.973272 | orchestrator |
2026-01-07 00:31:43.973283 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-07 00:31:43.973294 | orchestrator | Wednesday 07 January 2026 00:31:31 +0000 (0:00:01.604) 0:03:40.238 *****
2026-01-07 00:31:43.973305 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:43.973316 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:43.973326 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:43.973338 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:43.973351 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:43.973363 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:43.973376 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:43.973389 | orchestrator |
2026-01-07 00:31:43.973401 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-07 00:31:43.973414 | orchestrator | Wednesday 07 January 2026 00:31:32 +0000 (0:00:00.303) 0:03:40.542 *****
2026-01-07 00:31:43.973426 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:43.973440 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:43.973452 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:43.973465 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:43.973477 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:43.973489 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:43.973501 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:43.973513 | orchestrator |
2026-01-07 00:31:43.973541 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-07 00:31:43.973629 | orchestrator | Wednesday 07 January 2026 00:31:37 +0000 (0:00:05.579) 0:03:46.122 *****
2026-01-07 00:31:43.973643 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-07 00:31:43.973657 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-07 00:31:43.973670 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:43.973683 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-07 00:31:43.973695 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:43.973706 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-07 00:31:43.973717 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:43.973728 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-07 00:31:43.973739 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:43.973750 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-07 00:31:43.973761 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:43.973771 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:43.973782 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-07 00:31:43.973793 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:43.973804 | orchestrator |
2026-01-07 00:31:43.973815 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-07 00:31:43.973826 | orchestrator | Wednesday 07 January 2026 00:31:38 +0000 (0:00:00.307) 0:03:46.429 *****
2026-01-07 00:31:43.973837 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-07 00:31:43.973848 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-07 00:31:43.973859 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-07 00:31:43.973887 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-07 00:31:43.973899 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-07 00:31:43.973910 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-07 00:31:43.973921 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-07 00:31:43.973932 | orchestrator |
2026-01-07 00:31:43.973943 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-07 00:31:43.973954 | orchestrator | Wednesday 07 January 2026 00:31:39 +0000 (0:00:01.084) 0:03:47.514 *****
2026-01-07 00:31:43.973968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:31:43.973981 | orchestrator |
2026-01-07 00:31:43.974001 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-07 00:31:43.974012 | orchestrator | Wednesday 07 January 2026 00:31:39 +0000 (0:00:00.587) 0:03:48.101 *****
2026-01-07 00:31:43.974084 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:43.974096 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:43.974107 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:43.974118 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:43.974129 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:43.974140 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:43.974151 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:43.974161 | orchestrator |
2026-01-07 00:31:43.974172 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-07 00:31:43.974184 | orchestrator | Wednesday 07 January 2026 00:31:41 +0000 (0:00:01.339) 0:03:49.441 *****
2026-01-07 00:31:43.974195 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:43.974205 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:43.974216 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:43.974227 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:43.974238 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:43.974248 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:43.974259 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:43.974270 | orchestrator |
2026-01-07 00:31:43.974281 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-07 00:31:43.974292 | orchestrator | Wednesday 07 January 2026 00:31:41 +0000 (0:00:00.610) 0:03:50.052 *****
2026-01-07 00:31:43.974303 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:43.974314 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:43.974325 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:43.974336 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:43.974347 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:43.974358 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:43.974368 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:43.974379 | orchestrator |
2026-01-07 00:31:43.974390 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-07 00:31:43.974401 | orchestrator | Wednesday 07 January 2026 00:31:42 +0000 (0:00:00.590) 0:03:50.643 *****
2026-01-07 00:31:43.974412 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:43.974423 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:43.974434 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:43.974445 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:43.974456 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:43.974466 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:43.974477 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:43.974488 | orchestrator |
2026-01-07 00:31:43.974499 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-07 00:31:43.974510 | orchestrator | Wednesday 07 January 2026 00:31:42 +0000 (0:00:00.604) 0:03:51.247 *****
2026-01-07 00:31:43.974528 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744342.6693778, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:43.974572 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744342.0773358, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:43.974594 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744330.4789762, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:43.974630 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744364.0373363, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.038675 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744346.9200854, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.038823 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744336.0779402, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.038849 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744345.7258565, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.038870 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.038914 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.038970 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.038992 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.039053 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.039078 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.039097 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:49.039115 | orchestrator |
2026-01-07 00:31:49.039129 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-07 00:31:49.039143 | orchestrator | Wednesday 07 January 2026 00:31:43 +0000 (0:00:01.068) 0:03:52.316 *****
2026-01-07 00:31:49.039154 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:49.039167 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:49.039178 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:49.039191 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:49.039210 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:49.039227 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:49.039246 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:49.039264 | orchestrator |
2026-01-07 00:31:49.039281 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-07 00:31:49.039298 | orchestrator | Wednesday 07 January 2026 00:31:45 +0000 (0:00:01.186) 0:03:53.502 *****
2026-01-07 00:31:49.039315 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:49.039334 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:49.039371 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:49.039390 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:49.039408 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:49.039427 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:49.039438 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:49.039449 | orchestrator |
2026-01-07 00:31:49.039461 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-07 00:31:49.039480 | orchestrator | Wednesday 07 January 2026 00:31:46 +0000 (0:00:01.210) 0:03:54.719 *****
2026-01-07 00:31:49.039510 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:49.039536 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:49.039583 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:49.039601 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:49.039620 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:49.039637 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:49.039655 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:49.039673 | orchestrator |
2026-01-07 00:31:49.039691 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-07 00:31:49.039708 | orchestrator | Wednesday 07 January 2026 00:31:47 +0000 (0:00:01.210) 0:03:55.929 *****
2026-01-07 00:31:49.039726 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:49.039745 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:49.039763 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:49.039782 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:49.039799 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:49.039819 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:49.039837 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:49.039853 | orchestrator |
2026-01-07 00:31:49.039870 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-07 00:31:49.039888 | orchestrator | Wednesday 07 January 2026 00:31:47 +0000 (0:00:00.265) 0:03:56.195 *****
2026-01-07 00:31:49.039905 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:49.039924 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:49.039942 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:49.039960 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:49.039978 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:49.039996 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:49.040013 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:49.040031 | orchestrator |
2026-01-07 00:31:49.040050 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-07 00:31:49.040067 | orchestrator | Wednesday 07 January 2026 00:31:48 +0000 (0:00:00.760) 0:03:56.955 *****
2026-01-07 00:31:49.040087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:31:49.040109 | orchestrator |
2026-01-07 00:31:49.040127 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-07 00:31:49.040162 | orchestrator | Wednesday 07 January 2026 00:31:49 +0000 (0:00:00.430) 0:03:57.386 *****
2026-01-07 00:33:05.716541 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:05.716663 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:33:05.716680 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:33:05.716691 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:33:05.716703 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:33:05.716714 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:05.716725 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:33:05.716736 | orchestrator |
2026-01-07 00:33:05.716748 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-07 00:33:05.716761 | orchestrator | Wednesday 07 January 2026 00:31:57 +0000 (0:00:08.378) 0:04:05.764 *****
2026-01-07 00:33:05.716772 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:33:05.716785 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:33:05.716796 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:05.716830 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:33:05.716842 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:33:05.716853 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:33:05.716864 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:33:05.716874 | orchestrator |
2026-01-07 00:33:05.716885 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-07 00:33:05.716896 | orchestrator | Wednesday 07 January 2026 00:31:58 +0000 (0:00:01.380) 0:04:07.145 *****
2026-01-07 00:33:05.716908 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:05.716918 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:33:05.716929 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:33:05.716940 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:33:05.716951 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:33:05.716961 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:33:05.716972 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:33:05.716983 | orchestrator |
2026-01-07 00:33:05.716994 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-07 00:33:05.717005 | orchestrator | Wednesday 07 January 2026 00:31:59 +0000 (0:00:01.153) 0:04:08.299 *****
2026-01-07 00:33:05.717016 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:05.717027 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:33:05.717040 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:33:05.717053 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:33:05.717066 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:33:05.717078 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:33:05.717090 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:33:05.717104 | orchestrator |
2026-01-07 00:33:05.717117 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-07 00:33:05.717131 | orchestrator | Wednesday 07 January 2026 00:32:00 +0000 (0:00:00.326) 0:04:08.626 *****
2026-01-07 00:33:05.717144 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:05.717156 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:33:05.717169 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:33:05.717182 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:33:05.717194 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:33:05.717206 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:33:05.717219 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:33:05.717232 | orchestrator |
2026-01-07 00:33:05.717245 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-07 00:33:05.717258 | orchestrator | Wednesday 07 January 2026 00:32:00 +0000 (0:00:00.331) 0:04:08.957 *****
2026-01-07 00:33:05.717271 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:05.717283 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:33:05.717295 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:33:05.717308 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:33:05.717321 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:33:05.717334 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:33:05.717346 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:33:05.717360 | orchestrator |
2026-01-07 00:33:05.717373 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-07 00:33:05.717386 | orchestrator | Wednesday 07 January 2026 00:32:00 +0000 (0:00:00.306) 0:04:09.264 *****
2026-01-07 00:33:05.717397 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:33:05.717408 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:33:05.717419 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:33:05.717473 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:33:05.717486 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:33:05.717497 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:33:05.717508 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:05.717519 | orchestrator |
2026-01-07 00:33:05.717530 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-07 00:33:05.717542 | orchestrator | Wednesday 07 January 2026 00:32:05 +0000 (0:00:05.069) 0:04:14.333 *****
2026-01-07 00:33:05.717555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:33:05.717576 | orchestrator |
2026-01-07 00:33:05.717588 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-07 00:33:05.717599 | orchestrator | Wednesday 07 January 2026 00:32:06 +0000 (0:00:00.416) 0:04:14.750 *****
2026-01-07 00:33:05.717610 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-07 00:33:05.717621 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-07 00:33:05.717632 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:33:05.717644 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-07 00:33:05.717654 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-07 00:33:05.717666 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:33:05.717677 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-07 00:33:05.717688 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-07 00:33:05.717699 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-07 00:33:05.717710 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-07 00:33:05.717721 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:33:05.717731 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:33:05.717742 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-07 00:33:05.717753 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-07 00:33:05.717764 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-07 00:33:05.717775 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-07 00:33:05.717804 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:33:05.717816 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:33:05.717827 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-07 00:33:05.717838 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-07 00:33:05.717849 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:33:05.717861 | orchestrator |
2026-01-07 00:33:05.717872 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-07 00:33:05.717883 | orchestrator | Wednesday 07 January 2026 00:32:06 +0000 (0:00:00.376) 0:04:15.126 *****
2026-01-07 00:33:05.717895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:33:05.717906 | orchestrator |
2026-01-07 00:33:05.717917 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-07 00:33:05.717928 | orchestrator | Wednesday 07 January 2026 00:32:07 +0000 (0:00:00.420) 0:04:15.546 *****
2026-01-07 00:33:05.717940 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-07 00:33:05.717951 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-07 00:33:05.717962 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:33:05.717974 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-07 00:33:05.717985 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:33:05.717996 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-07 00:33:05.718007 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:33:05.718079 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:33:05.718093 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-07 00:33:05.718104 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-07 00:33:05.718115 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:33:05.718126 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:33:05.718137 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-07 00:33:05.718148 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:33:05.718159 | orchestrator |
2026-01-07 00:33:05.718177 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-07 00:33:05.718189 | orchestrator | Wednesday 07 January 2026 00:32:07 +0000 (0:00:00.307) 0:04:15.854 *****
2026-01-07 00:33:05.718200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:33:05.718211 | orchestrator |
2026-01-07 00:33:05.718222 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-07 00:33:05.718233 | orchestrator | Wednesday 07 January 2026 00:32:07 +0000 (0:00:00.459) 0:04:16.314 *****
2026-01-07 00:33:05.718244 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:33:05.718254 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:33:05.718265 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:33:05.718276 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:05.718287 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:33:05.718298 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:33:05.718309 | orchestrator | changed: [testbed-manager]
2026-01-07 00:33:05.718320 | orchestrator |
2026-01-07 00:33:05.718331 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-07 00:33:05.718342 | orchestrator | Wednesday 07 January 2026 00:32:41 +0000 (0:00:33.895) 0:04:50.209 *****
2026-01-07 00:33:05.718353 | orchestrator | changed: [testbed-manager]
2026-01-07 00:33:05.718364 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:05.718375 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:33:05.718386 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:33:05.718397 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:33:05.718408 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:33:05.718418 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:33:05.718429 | orchestrator |
2026-01-07 00:33:05.718440 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-07 00:33:05.718470 | orchestrator | Wednesday 07 January 2026 00:32:50 +0000 (0:00:08.297) 0:04:58.507 *****
2026-01-07 00:33:05.718482 | orchestrator | changed: [testbed-manager]
2026-01-07 00:33:05.718492 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:33:05.718503 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:05.718514 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:33:05.718525 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:33:05.718535 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:33:05.718546 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:33:05.718557 | orchestrator |
2026-01-07 00:33:05.718568 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-07 00:33:05.718589 | orchestrator | Wednesday 07 January 2026 00:32:57 +0000 (0:00:07.783) 0:05:06.290 *****
2026-01-07 00:33:05.718600 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:05.718611 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:33:05.718622 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:33:05.718633 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:33:05.718644 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:33:05.718655 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:33:05.718665 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:33:05.718676 | orchestrator |
2026-01-07 00:33:05.718687 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-07 00:33:05.718698 | orchestrator | Wednesday 07 January 2026 00:32:59 +0000 (0:00:01.796) 0:05:08.087 *****
2026-01-07 00:33:05.718709 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:05.718720 | orchestrator | changed: [testbed-manager]
2026-01-07 00:33:05.718731 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:33:05.718741 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:33:05.718752 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:33:05.718763 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:33:05.718774 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:33:05.718785 | orchestrator |
2026-01-07 00:33:05.718804 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-07 00:33:17.473570 | orchestrator | Wednesday 07 January 2026 00:33:05 +0000 (0:00:05.970) 0:05:14.057 *****
2026-01-07 00:33:17.473696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:33:17.473716 | orchestrator |
2026-01-07 00:33:17.473727 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-07 00:33:17.473738 | orchestrator | Wednesday 07 January 2026 00:33:06 +0000 (0:00:00.604) 0:05:14.661 *****
2026-01-07 00:33:17.473748 | orchestrator | changed: [testbed-manager]
2026-01-07 00:33:17.473758 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:17.473769 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:33:17.473779 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:33:17.473788 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:33:17.473798 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:33:17.473808 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:33:17.473817 | orchestrator |
2026-01-07 00:33:17.473827 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-07 00:33:17.473837 | orchestrator | Wednesday 07 January 2026 00:33:07 +0000 (0:00:00.751) 0:05:15.412 *****
2026-01-07 00:33:17.473846 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:33:17.473857 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:17.473867 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:33:17.473876 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:33:17.473886 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:33:17.473895 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:33:17.473905 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:33:17.473914 | orchestrator |
2026-01-07 00:33:17.473924 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-07 00:33:17.473953 | orchestrator | Wednesday 07 January 2026 00:33:08 +0000 (0:00:01.737) 0:05:17.150 *****
2026-01-07 00:33:17.473964 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:17.473974 |
orchestrator | changed: [testbed-node-4] 2026-01-07 00:33:17.473983 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:33:17.473993 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:33:17.474003 | orchestrator | changed: [testbed-manager] 2026-01-07 00:33:17.474012 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:33:17.474081 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:33:17.474093 | orchestrator | 2026-01-07 00:33:17.474105 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-01-07 00:33:17.474117 | orchestrator | Wednesday 07 January 2026 00:33:09 +0000 (0:00:00.843) 0:05:17.993 ***** 2026-01-07 00:33:17.474128 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:17.474139 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:17.474150 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:17.474161 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:17.474173 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:17.474184 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:17.474195 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:17.474207 | orchestrator | 2026-01-07 00:33:17.474218 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-01-07 00:33:17.474230 | orchestrator | Wednesday 07 January 2026 00:33:09 +0000 (0:00:00.286) 0:05:18.279 ***** 2026-01-07 00:33:17.474241 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:17.474252 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:17.474263 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:17.474274 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:17.474286 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:17.474297 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:17.474309 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:17.474319 | 
orchestrator | 2026-01-07 00:33:17.474347 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-01-07 00:33:17.474382 | orchestrator | Wednesday 07 January 2026 00:33:10 +0000 (0:00:00.400) 0:05:18.679 ***** 2026-01-07 00:33:17.474394 | orchestrator | ok: [testbed-manager] 2026-01-07 00:33:17.474405 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:33:17.474416 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:33:17.474445 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:33:17.474456 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:33:17.474465 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:33:17.474475 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:33:17.474484 | orchestrator | 2026-01-07 00:33:17.474494 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-01-07 00:33:17.474504 | orchestrator | Wednesday 07 January 2026 00:33:10 +0000 (0:00:00.309) 0:05:18.989 ***** 2026-01-07 00:33:17.474514 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:17.474523 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:17.474533 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:17.474542 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:17.474552 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:17.474561 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:17.474571 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:17.474580 | orchestrator | 2026-01-07 00:33:17.474590 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-01-07 00:33:17.474601 | orchestrator | Wednesday 07 January 2026 00:33:10 +0000 (0:00:00.343) 0:05:19.332 ***** 2026-01-07 00:33:17.474611 | orchestrator | ok: [testbed-manager] 2026-01-07 00:33:17.474621 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:33:17.474630 | orchestrator | ok: [testbed-node-4] 2026-01-07 
00:33:17.474640 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:33:17.474650 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:33:17.474659 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:33:17.474669 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:33:17.474678 | orchestrator | 2026-01-07 00:33:17.474688 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-07 00:33:17.474698 | orchestrator | Wednesday 07 January 2026 00:33:11 +0000 (0:00:00.322) 0:05:19.655 ***** 2026-01-07 00:33:17.474707 | orchestrator | ok: [testbed-manager] =>  2026-01-07 00:33:17.474717 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:17.474726 | orchestrator | ok: [testbed-node-3] =>  2026-01-07 00:33:17.474736 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:17.474747 | orchestrator | ok: [testbed-node-4] =>  2026-01-07 00:33:17.474764 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:17.474780 | orchestrator | ok: [testbed-node-5] =>  2026-01-07 00:33:17.474797 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:17.474836 | orchestrator | ok: [testbed-node-0] =>  2026-01-07 00:33:17.474852 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:17.474863 | orchestrator | ok: [testbed-node-1] =>  2026-01-07 00:33:17.474872 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:17.474882 | orchestrator | ok: [testbed-node-2] =>  2026-01-07 00:33:17.474891 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:17.474901 | orchestrator | 2026-01-07 00:33:17.474911 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-07 00:33:17.474920 | orchestrator | Wednesday 07 January 2026 00:33:11 +0000 (0:00:00.310) 0:05:19.965 ***** 2026-01-07 00:33:17.474930 | orchestrator | ok: [testbed-manager] =>  2026-01-07 00:33:17.474939 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:17.474949 | orchestrator | ok: 
[testbed-node-3] =>  2026-01-07 00:33:17.474958 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:17.474968 | orchestrator | ok: [testbed-node-4] =>  2026-01-07 00:33:17.474978 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:17.474989 | orchestrator | ok: [testbed-node-5] =>  2026-01-07 00:33:17.475000 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:17.475010 | orchestrator | ok: [testbed-node-0] =>  2026-01-07 00:33:17.475021 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:17.475032 | orchestrator | ok: [testbed-node-1] =>  2026-01-07 00:33:17.475052 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:17.475063 | orchestrator | ok: [testbed-node-2] =>  2026-01-07 00:33:17.475073 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:17.475084 | orchestrator | 2026-01-07 00:33:17.475095 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-01-07 00:33:17.475124 | orchestrator | Wednesday 07 January 2026 00:33:11 +0000 (0:00:00.298) 0:05:20.264 ***** 2026-01-07 00:33:17.475135 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:17.475145 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:17.475156 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:17.475166 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:17.475177 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:17.475187 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:17.475198 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:17.475208 | orchestrator | 2026-01-07 00:33:17.475219 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-01-07 00:33:17.475230 | orchestrator | Wednesday 07 January 2026 00:33:12 +0000 (0:00:00.283) 0:05:20.548 ***** 2026-01-07 00:33:17.475241 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:17.475252 | orchestrator | 
skipping: [testbed-node-3] 2026-01-07 00:33:17.475262 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:17.475272 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:17.475283 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:17.475293 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:17.475304 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:17.475314 | orchestrator | 2026-01-07 00:33:17.475325 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-01-07 00:33:17.475336 | orchestrator | Wednesday 07 January 2026 00:33:12 +0000 (0:00:00.330) 0:05:20.878 ***** 2026-01-07 00:33:17.475349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:33:17.475365 | orchestrator | 2026-01-07 00:33:17.475383 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-01-07 00:33:17.475402 | orchestrator | Wednesday 07 January 2026 00:33:12 +0000 (0:00:00.442) 0:05:21.321 ***** 2026-01-07 00:33:17.475420 | orchestrator | ok: [testbed-manager] 2026-01-07 00:33:17.475485 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:33:17.475507 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:33:17.475518 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:33:17.475529 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:33:17.475539 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:33:17.475550 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:33:17.475561 | orchestrator | 2026-01-07 00:33:17.475572 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-01-07 00:33:17.475582 | orchestrator | Wednesday 07 January 2026 00:33:14 +0000 (0:00:01.179) 0:05:22.501 ***** 2026-01-07 
00:33:17.475593 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:33:17.475604 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:33:17.475614 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:33:17.475625 | orchestrator | ok: [testbed-manager] 2026-01-07 00:33:17.475635 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:33:17.475646 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:33:17.475656 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:33:17.475667 | orchestrator | 2026-01-07 00:33:17.475678 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-01-07 00:33:17.475690 | orchestrator | Wednesday 07 January 2026 00:33:17 +0000 (0:00:02.940) 0:05:25.442 ***** 2026-01-07 00:33:17.475701 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-01-07 00:33:17.475713 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-01-07 00:33:17.475723 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-01-07 00:33:17.475742 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-01-07 00:33:17.475754 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-01-07 00:33:17.475765 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-01-07 00:33:17.475775 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:17.475786 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-01-07 00:33:17.475797 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-01-07 00:33:17.475807 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-01-07 00:33:17.475818 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:17.475829 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-01-07 00:33:17.475839 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-01-07 00:33:17.475850 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-01-07 00:33:17.475861 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:17.475872 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-01-07 00:33:17.475891 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-01-07 00:34:20.505596 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-01-07 00:34:20.505716 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:20.505732 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-01-07 00:34:20.505745 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-01-07 00:34:20.505756 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-01-07 00:34:20.505767 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:20.505778 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:20.505789 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-01-07 00:34:20.505800 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-01-07 00:34:20.505811 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-01-07 00:34:20.505822 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:20.505833 | orchestrator | 2026-01-07 00:34:20.505846 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-01-07 00:34:20.505858 | orchestrator | Wednesday 07 January 2026 00:33:17 +0000 (0:00:00.608) 0:05:26.050 ***** 2026-01-07 00:34:20.505869 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:20.505881 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.505892 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.505903 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.505914 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.505924 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.505935 | orchestrator | changed: [testbed-node-3] 
2026-01-07 00:34:20.505946 | orchestrator | 2026-01-07 00:34:20.505957 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-01-07 00:34:20.505968 | orchestrator | Wednesday 07 January 2026 00:33:24 +0000 (0:00:06.964) 0:05:33.014 ***** 2026-01-07 00:34:20.505979 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.505989 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:20.506000 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.506011 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.506086 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.506098 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.506109 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.506119 | orchestrator | 2026-01-07 00:34:20.506132 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-01-07 00:34:20.506145 | orchestrator | Wednesday 07 January 2026 00:33:25 +0000 (0:00:01.110) 0:05:34.125 ***** 2026-01-07 00:34:20.506158 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:20.506171 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.506183 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.506196 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.506209 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.506255 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.506276 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.506295 | orchestrator | 2026-01-07 00:34:20.506336 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-01-07 00:34:20.506349 | orchestrator | Wednesday 07 January 2026 00:33:33 +0000 (0:00:08.137) 0:05:42.263 ***** 2026-01-07 00:34:20.506362 | orchestrator | changed: [testbed-manager] 2026-01-07 00:34:20.506375 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.506388 | 
orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.506401 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.506415 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.506428 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.506440 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.506452 | orchestrator | 2026-01-07 00:34:20.506466 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-01-07 00:34:20.506479 | orchestrator | Wednesday 07 January 2026 00:33:37 +0000 (0:00:03.344) 0:05:45.607 ***** 2026-01-07 00:34:20.506506 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:20.506517 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.506528 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.506538 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.506549 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.506560 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.506570 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.506581 | orchestrator | 2026-01-07 00:34:20.506592 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-01-07 00:34:20.506603 | orchestrator | Wednesday 07 January 2026 00:33:38 +0000 (0:00:01.413) 0:05:47.021 ***** 2026-01-07 00:34:20.506614 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:20.506624 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.506635 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.506645 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.506656 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.506667 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.506677 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.506688 | orchestrator | 2026-01-07 00:34:20.506699 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-01-07 00:34:20.506710 | orchestrator | Wednesday 07 January 2026 00:33:40 +0000 (0:00:01.546) 0:05:48.567 ***** 2026-01-07 00:34:20.506721 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:20.506731 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:20.506742 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:20.506753 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:20.506763 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:20.506775 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:20.506786 | orchestrator | changed: [testbed-manager] 2026-01-07 00:34:20.506796 | orchestrator | 2026-01-07 00:34:20.506808 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-01-07 00:34:20.506819 | orchestrator | Wednesday 07 January 2026 00:33:40 +0000 (0:00:00.633) 0:05:49.201 ***** 2026-01-07 00:34:20.506829 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:20.506840 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.506851 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.506862 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.506872 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.506883 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.506894 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.506904 | orchestrator | 2026-01-07 00:34:20.506915 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-01-07 00:34:20.506945 | orchestrator | Wednesday 07 January 2026 00:33:51 +0000 (0:00:10.633) 0:05:59.835 ***** 2026-01-07 00:34:20.506957 | orchestrator | changed: [testbed-manager] 2026-01-07 00:34:20.506968 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.506987 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.506998 | orchestrator | changed: [testbed-node-5] 
2026-01-07 00:34:20.507009 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.507020 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.507030 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.507041 | orchestrator | 2026-01-07 00:34:20.507052 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-01-07 00:34:20.507063 | orchestrator | Wednesday 07 January 2026 00:33:52 +0000 (0:00:00.947) 0:06:00.782 ***** 2026-01-07 00:34:20.507073 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:20.507084 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.507095 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.507106 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.507116 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.507127 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.507138 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.507148 | orchestrator | 2026-01-07 00:34:20.507159 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-01-07 00:34:20.507177 | orchestrator | Wednesday 07 January 2026 00:34:01 +0000 (0:00:09.523) 0:06:10.306 ***** 2026-01-07 00:34:20.507197 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:20.507215 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.507234 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.507252 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.507270 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.507288 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.507344 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.507365 | orchestrator | 2026-01-07 00:34:20.507385 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-01-07 00:34:20.507404 | orchestrator | Wednesday 07 January 2026 
00:34:13 +0000 (0:00:11.714) 0:06:22.020 ***** 2026-01-07 00:34:20.507422 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-01-07 00:34:20.507433 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-01-07 00:34:20.507444 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-01-07 00:34:20.507455 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-01-07 00:34:20.507466 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-01-07 00:34:20.507476 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-01-07 00:34:20.507487 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-01-07 00:34:20.507498 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-01-07 00:34:20.507509 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-01-07 00:34:20.507520 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-01-07 00:34:20.507530 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-01-07 00:34:20.507541 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-01-07 00:34:20.507552 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-01-07 00:34:20.507563 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-01-07 00:34:20.507574 | orchestrator | 2026-01-07 00:34:20.507584 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-01-07 00:34:20.507595 | orchestrator | Wednesday 07 January 2026 00:34:14 +0000 (0:00:01.214) 0:06:23.235 ***** 2026-01-07 00:34:20.507606 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:20.507617 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:20.507628 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:20.507638 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:20.507649 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:20.507660 | orchestrator | skipping: 
[testbed-node-1] 2026-01-07 00:34:20.507671 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:20.507681 | orchestrator | 2026-01-07 00:34:20.507692 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-01-07 00:34:20.507703 | orchestrator | Wednesday 07 January 2026 00:34:15 +0000 (0:00:00.572) 0:06:23.808 ***** 2026-01-07 00:34:20.507724 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:20.507735 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:20.507746 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:20.507756 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:20.507767 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:20.507777 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:20.507788 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:20.507798 | orchestrator | 2026-01-07 00:34:20.507809 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-01-07 00:34:20.507822 | orchestrator | Wednesday 07 January 2026 00:34:19 +0000 (0:00:04.051) 0:06:27.859 ***** 2026-01-07 00:34:20.507832 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:20.507843 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:20.507854 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:20.507864 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:20.507875 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:20.507886 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:20.507896 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:20.507907 | orchestrator | 2026-01-07 00:34:20.507918 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-01-07 00:34:20.507930 | orchestrator | Wednesday 07 January 2026 00:34:19 +0000 (0:00:00.495) 0:06:28.355 ***** 2026-01-07 
00:34:20.507940 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-01-07 00:34:20.507951 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-01-07 00:34:20.507962 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:20.507973 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-01-07 00:34:20.507984 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-01-07 00:34:20.507994 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:20.508005 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-01-07 00:34:20.508016 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-01-07 00:34:20.508027 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:20.508047 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-01-07 00:34:40.415669 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-01-07 00:34:40.415786 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:40.415795 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-01-07 00:34:40.415800 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-01-07 00:34:40.415805 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:40.415810 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-01-07 00:34:40.415816 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-01-07 00:34:40.415820 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:40.415825 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-01-07 00:34:40.415830 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-01-07 00:34:40.415835 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:40.415839 | orchestrator | 2026-01-07 00:34:40.415845 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install 
python bindings from pip)] *** 2026-01-07 00:34:40.415852 | orchestrator | Wednesday 07 January 2026 00:34:20 +0000 (0:00:00.764) 0:06:29.120 ***** 2026-01-07 00:34:40.415856 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:40.415861 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:40.415865 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:40.415870 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:40.415874 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:40.415879 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:40.415883 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:40.415888 | orchestrator | 2026-01-07 00:34:40.415893 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-01-07 00:34:40.415917 | orchestrator | Wednesday 07 January 2026 00:34:21 +0000 (0:00:00.521) 0:06:29.642 ***** 2026-01-07 00:34:40.415922 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:40.415926 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:40.415931 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:40.415935 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:40.415940 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:40.415944 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:40.415949 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:40.415953 | orchestrator | 2026-01-07 00:34:40.415958 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-01-07 00:34:40.415962 | orchestrator | Wednesday 07 January 2026 00:34:21 +0000 (0:00:00.513) 0:06:30.155 ***** 2026-01-07 00:34:40.415967 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:40.415972 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:40.415977 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:40.416026 | orchestrator | skipping: 
[testbed-node-5]
2026-01-07 00:34:40.416031 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:40.416035 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:40.416040 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:40.416044 | orchestrator |
2026-01-07 00:34:40.416049 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-07 00:34:40.416054 | orchestrator | Wednesday 07 January 2026 00:34:22 +0000 (0:00:00.539) 0:06:30.694 *****
2026-01-07 00:34:40.416058 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:40.416063 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:40.416068 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:40.416072 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:40.416077 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:40.416081 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:40.416086 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:40.416090 | orchestrator |
2026-01-07 00:34:40.416095 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-07 00:34:40.416103 | orchestrator | Wednesday 07 January 2026 00:34:24 +0000 (0:00:01.957) 0:06:32.652 *****
2026-01-07 00:34:40.416108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:34:40.416115 | orchestrator |
2026-01-07 00:34:40.416120 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-07 00:34:40.416125 | orchestrator | Wednesday 07 January 2026 00:34:25 +0000 (0:00:00.907) 0:06:33.560 *****
2026-01-07 00:34:40.416129 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:40.416134 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:40.416138 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:40.416143 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:40.416147 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:40.416152 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:40.416156 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:40.416160 | orchestrator |
2026-01-07 00:34:40.416165 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-07 00:34:40.416170 | orchestrator | Wednesday 07 January 2026 00:34:26 +0000 (0:00:00.848) 0:06:34.408 *****
2026-01-07 00:34:40.416175 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:40.416179 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:40.416184 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:40.416188 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:40.416193 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:40.416197 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:40.416202 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:40.416208 | orchestrator |
2026-01-07 00:34:40.416213 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-07 00:34:40.416224 | orchestrator | Wednesday 07 January 2026 00:34:26 +0000 (0:00:00.900) 0:06:35.308 *****
2026-01-07 00:34:40.416229 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:40.416234 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:40.416240 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:40.416244 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:40.416249 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:40.416254 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:40.416277 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:40.416282 | orchestrator |
2026-01-07 00:34:40.416287 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-07 00:34:40.416305 | orchestrator | Wednesday 07 January 2026 00:34:28 +0000 (0:00:01.653) 0:06:36.961 *****
2026-01-07 00:34:40.416311 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:40.416316 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:40.416321 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:40.416326 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:40.416331 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:40.416336 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:40.416341 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:40.416346 | orchestrator |
2026-01-07 00:34:40.416352 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-07 00:34:40.416357 | orchestrator | Wednesday 07 January 2026 00:34:30 +0000 (0:00:01.416) 0:06:38.378 *****
2026-01-07 00:34:40.416362 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:40.416367 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:40.416372 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:40.416377 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:40.416382 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:40.416387 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:40.416392 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:40.416397 | orchestrator |
2026-01-07 00:34:40.416403 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-07 00:34:40.416408 | orchestrator | Wednesday 07 January 2026 00:34:31 +0000 (0:00:01.413) 0:06:39.792 *****
2026-01-07 00:34:40.416413 | orchestrator | changed: [testbed-manager]
2026-01-07 00:34:40.416419 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:40.416424 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:40.416429 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:40.416434 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:40.416439 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:40.416444 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:40.416449 | orchestrator |
2026-01-07 00:34:40.416454 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-01-07 00:34:40.416459 | orchestrator | Wednesday 07 January 2026 00:34:32 +0000 (0:00:01.490) 0:06:41.283 *****
2026-01-07 00:34:40.416465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:34:40.416470 | orchestrator |
2026-01-07 00:34:40.416475 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-01-07 00:34:40.416480 | orchestrator | Wednesday 07 January 2026 00:34:33 +0000 (0:00:01.046) 0:06:42.329 *****
2026-01-07 00:34:40.416485 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:40.416490 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:40.416495 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:40.416501 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:40.416506 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:40.416511 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:40.416516 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:40.416521 | orchestrator |
2026-01-07 00:34:40.416526 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-01-07 00:34:40.416532 | orchestrator | Wednesday 07 January 2026 00:34:35 +0000 (0:00:01.356) 0:06:43.686 *****
2026-01-07 00:34:40.416545 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:40.416550 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:40.416555 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:40.416560 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:40.416564 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:40.416569 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:40.416573 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:40.416577 | orchestrator |
2026-01-07 00:34:40.416582 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-01-07 00:34:40.416587 | orchestrator | Wednesday 07 January 2026 00:34:36 +0000 (0:00:01.255) 0:06:44.941 *****
2026-01-07 00:34:40.416591 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:40.416596 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:40.416600 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:40.416605 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:40.416609 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:40.416614 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:40.416618 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:40.416623 | orchestrator |
2026-01-07 00:34:40.416627 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-01-07 00:34:40.416632 | orchestrator | Wednesday 07 January 2026 00:34:37 +0000 (0:00:01.145) 0:06:46.087 *****
2026-01-07 00:34:40.416636 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:40.416641 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:40.416645 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:40.416650 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:40.416654 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:40.416659 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:40.416663 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:40.416667 | orchestrator |
2026-01-07 00:34:40.416672 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-01-07 00:34:40.416677 | orchestrator | Wednesday 07 January 2026 00:34:39 +0000 (0:00:01.444) 0:06:47.532 *****
2026-01-07 00:34:40.416681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:34:40.416686 | orchestrator |
2026-01-07 00:34:40.416690 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:34:40.416695 | orchestrator | Wednesday 07 January 2026 00:34:40 +0000 (0:00:00.906) 0:06:48.438 *****
2026-01-07 00:34:40.416699 | orchestrator |
2026-01-07 00:34:40.416704 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:34:40.416708 | orchestrator | Wednesday 07 January 2026 00:34:40 +0000 (0:00:00.040) 0:06:48.479 *****
2026-01-07 00:34:40.416713 | orchestrator |
2026-01-07 00:34:40.416717 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:34:40.416722 | orchestrator | Wednesday 07 January 2026 00:34:40 +0000 (0:00:00.039) 0:06:48.518 *****
2026-01-07 00:34:40.416726 | orchestrator |
2026-01-07 00:34:40.416731 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:34:40.416738 | orchestrator | Wednesday 07 January 2026 00:34:40 +0000 (0:00:00.049) 0:06:48.567 *****
2026-01-07 00:35:06.982156 | orchestrator |
2026-01-07 00:35:06.982390 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:35:06.982414 | orchestrator | Wednesday 07 January 2026 00:34:40 +0000 (0:00:00.041) 0:06:48.608 *****
2026-01-07 00:35:06.982426 | orchestrator |
2026-01-07 00:35:06.982437 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:35:06.982448 | orchestrator | Wednesday 07 January 2026 00:34:40 +0000 (0:00:00.040) 0:06:48.649 *****
2026-01-07 00:35:06.982459 | orchestrator |
2026-01-07 00:35:06.982470 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:35:06.982481 | orchestrator | Wednesday 07 January 2026 00:34:40 +0000 (0:00:00.061) 0:06:48.711 *****
2026-01-07 00:35:06.982517 | orchestrator |
2026-01-07 00:35:06.982529 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-07 00:35:06.982540 | orchestrator | Wednesday 07 January 2026 00:34:40 +0000 (0:00:00.042) 0:06:48.753 *****
2026-01-07 00:35:06.982550 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:06.982562 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:06.982573 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:06.982584 | orchestrator |
2026-01-07 00:35:06.982595 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-01-07 00:35:06.982607 | orchestrator | Wednesday 07 January 2026 00:34:41 +0000 (0:00:01.261) 0:06:50.014 *****
2026-01-07 00:35:06.982620 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:06.982634 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:06.982647 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:06.982660 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:06.982673 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:06.982685 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:06.982698 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:06.982712 | orchestrator |
2026-01-07 00:35:06.982725 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-01-07 00:35:06.982738 | orchestrator | Wednesday 07 January 2026 00:34:43 +0000 (0:00:01.536) 0:06:51.550 *****
2026-01-07 00:35:06.982751 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:06.982765 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:06.982778 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:06.982790 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:06.982828 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:06.982842 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:06.982855 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:06.982867 | orchestrator |
2026-01-07 00:35:06.982877 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-01-07 00:35:06.982888 | orchestrator | Wednesday 07 January 2026 00:34:44 +0000 (0:00:01.219) 0:06:52.770 *****
2026-01-07 00:35:06.982899 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:06.982909 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:06.982920 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:06.982931 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:06.982941 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:06.982952 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:06.982963 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:06.982973 | orchestrator |
2026-01-07 00:35:06.982984 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-01-07 00:35:06.982995 | orchestrator | Wednesday 07 January 2026 00:34:46 +0000 (0:00:02.384) 0:06:55.155 *****
2026-01-07 00:35:06.983006 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:06.983016 | orchestrator |
2026-01-07 00:35:06.983027 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-01-07 00:35:06.983038 | orchestrator | Wednesday 07 January 2026 00:34:46 +0000 (0:00:00.127) 0:06:55.282 *****
2026-01-07 00:35:06.983065 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:06.983076 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:06.983087 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:06.983097 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:06.983108 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:06.983118 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:06.983129 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:06.983140 | orchestrator |
2026-01-07 00:35:06.983152 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-01-07 00:35:06.983163 | orchestrator | Wednesday 07 January 2026 00:34:47 +0000 (0:00:01.034) 0:06:56.317 *****
2026-01-07 00:35:06.983174 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:06.983185 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:06.983195 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:06.983253 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:06.983265 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:06.983276 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:06.983286 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:06.983297 | orchestrator |
2026-01-07 00:35:06.983308 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-01-07 00:35:06.983318 | orchestrator | Wednesday 07 January 2026 00:34:48 +0000 (0:00:00.528) 0:06:56.845 *****
2026-01-07 00:35:06.983330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:35:06.983343 | orchestrator |
2026-01-07 00:35:06.983354 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-01-07 00:35:06.983365 | orchestrator | Wednesday 07 January 2026 00:34:49 +0000 (0:00:01.092) 0:06:57.937 *****
2026-01-07 00:35:06.983376 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:06.983387 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:06.983397 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:06.983408 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:06.983418 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:06.983429 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:06.983439 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:06.983450 | orchestrator |
2026-01-07 00:35:06.983461 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-01-07 00:35:06.983472 | orchestrator | Wednesday 07 January 2026 00:34:50 +0000 (0:00:00.880) 0:06:58.818 *****
2026-01-07 00:35:06.983482 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-01-07 00:35:06.983515 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-01-07 00:35:06.983527 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-01-07 00:35:06.983538 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-01-07 00:35:06.983549 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-01-07 00:35:06.983559 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-01-07 00:35:06.983570 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-01-07 00:35:06.983581 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-01-07 00:35:06.983592 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-01-07 00:35:06.983602 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-01-07 00:35:06.983613 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-01-07 00:35:06.983624 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-01-07 00:35:06.983634 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-01-07 00:35:06.983645 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-01-07 00:35:06.983656 | orchestrator |
2026-01-07 00:35:06.983667 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-01-07 00:35:06.983678 | orchestrator | Wednesday 07 January 2026 00:34:53 +0000 (0:00:02.562) 0:07:01.380 *****
2026-01-07 00:35:06.983689 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:06.983699 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:06.983710 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:06.983720 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:06.983731 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:06.983742 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:06.983752 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:06.983763 | orchestrator |
2026-01-07 00:35:06.983773 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-01-07 00:35:06.983784 | orchestrator | Wednesday 07 January 2026 00:34:53 +0000 (0:00:00.742) 0:07:02.123 *****
2026-01-07 00:35:06.983797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:35:06.983819 | orchestrator |
2026-01-07 00:35:06.983830 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-01-07 00:35:06.983841 | orchestrator | Wednesday 07 January 2026 00:34:54 +0000 (0:00:00.857) 0:07:02.981 *****
2026-01-07 00:35:06.983851 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:06.983862 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:06.983873 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:06.983883 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:06.983922 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:06.983934 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:06.983944 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:06.983955 | orchestrator |
2026-01-07 00:35:06.983966 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-01-07 00:35:06.983977 | orchestrator | Wednesday 07 January 2026 00:34:55 +0000 (0:00:00.866) 0:07:03.847 *****
2026-01-07 00:35:06.983987 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:06.983998 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:06.984008 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:06.984019 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:06.984030 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:06.984040 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:06.984062 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:06.984081 | orchestrator |
2026-01-07 00:35:06.984093 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-01-07 00:35:06.984103 | orchestrator | Wednesday 07 January 2026 00:34:56 +0000 (0:00:01.112) 0:07:04.960 *****
2026-01-07 00:35:06.984114 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:06.984125 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:06.984136 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:06.984146 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:06.984157 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:06.984167 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:06.984178 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:06.984189 | orchestrator |
2026-01-07 00:35:06.984220 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-01-07 00:35:06.984233 | orchestrator | Wednesday 07 January 2026 00:34:57 +0000 (0:00:00.539) 0:07:05.500 *****
2026-01-07 00:35:06.984244 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:06.984255 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:06.984266 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:06.984276 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:06.984287 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:06.984298 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:06.984308 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:06.984318 | orchestrator |
2026-01-07 00:35:06.984329 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-01-07 00:35:06.984340 | orchestrator | Wednesday 07 January 2026 00:34:58 +0000 (0:00:01.552) 0:07:07.053 *****
2026-01-07 00:35:06.984351 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:06.984362 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:06.984372 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:06.984383 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:06.984393 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:06.984419 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:06.984429 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:06.984468 | orchestrator |
2026-01-07 00:35:06.984480 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-01-07 00:35:06.984491 | orchestrator | Wednesday 07 January 2026 00:34:59 +0000 (0:00:00.507) 0:07:07.560 *****
2026-01-07 00:35:06.984501 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:06.984512 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:06.984523 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:06.984534 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:06.984552 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:06.984563 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:06.984583 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:40.131085 | orchestrator |
2026-01-07 00:35:40.131318 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-07 00:35:40.131338 | orchestrator | Wednesday 07 January 2026 00:35:06 +0000 (0:00:07.765) 0:07:15.325 *****
2026-01-07 00:35:40.131351 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.131363 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:40.131376 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:40.131387 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:40.131398 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:40.131409 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:40.131420 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:40.131431 | orchestrator |
2026-01-07 00:35:40.131442 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-07 00:35:40.131453 | orchestrator | Wednesday 07 January 2026 00:35:08 +0000 (0:00:01.563) 0:07:16.889 *****
2026-01-07 00:35:40.131464 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.131475 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:40.131486 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:40.131497 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:40.131508 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:40.131519 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:40.131532 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:40.131545 | orchestrator |
2026-01-07 00:35:40.131559 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-07 00:35:40.131575 | orchestrator | Wednesday 07 January 2026 00:35:10 +0000 (0:00:01.772) 0:07:18.662 *****
2026-01-07 00:35:40.131595 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.131614 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:40.131633 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:40.131650 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:40.131667 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:40.131685 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:40.131702 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:40.131719 | orchestrator |
2026-01-07 00:35:40.131737 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-07 00:35:40.131757 | orchestrator | Wednesday 07 January 2026 00:35:11 +0000 (0:00:01.676) 0:07:20.339 *****
2026-01-07 00:35:40.131777 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.131795 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:40.131814 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:40.131833 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:40.131851 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:40.131869 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:40.131888 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:40.131909 | orchestrator |
2026-01-07 00:35:40.131928 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-07 00:35:40.131947 | orchestrator | Wednesday 07 January 2026 00:35:12 +0000 (0:00:00.905) 0:07:21.244 *****
2026-01-07 00:35:40.131966 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:40.131985 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:40.131999 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:40.132010 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:40.132021 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:40.132032 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:40.132043 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:40.132054 | orchestrator |
2026-01-07 00:35:40.132065 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-07 00:35:40.132077 | orchestrator | Wednesday 07 January 2026 00:35:13 +0000 (0:00:01.031) 0:07:22.276 *****
2026-01-07 00:35:40.132089 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:40.132128 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:40.132172 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:40.132183 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:40.132215 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:40.132226 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:40.132237 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:40.132248 | orchestrator |
2026-01-07 00:35:40.132259 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-07 00:35:40.132269 | orchestrator | Wednesday 07 January 2026 00:35:14 +0000 (0:00:00.558) 0:07:22.834 *****
2026-01-07 00:35:40.132281 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.132291 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:40.132302 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:40.132313 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:40.132323 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:40.132334 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:40.132344 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:40.132355 | orchestrator |
2026-01-07 00:35:40.132366 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-07 00:35:40.132377 | orchestrator | Wednesday 07 January 2026 00:35:15 +0000 (0:00:00.570) 0:07:23.405 *****
2026-01-07 00:35:40.132388 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.132398 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:40.132409 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:40.132419 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:40.132430 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:40.132441 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:40.132451 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:40.132462 | orchestrator |
2026-01-07 00:35:40.132473 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-07 00:35:40.132484 | orchestrator | Wednesday 07 January 2026 00:35:15 +0000 (0:00:00.524) 0:07:23.930 *****
2026-01-07 00:35:40.132495 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.132505 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:40.132516 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:40.132526 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:40.132537 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:40.132548 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:40.132558 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:40.132569 | orchestrator |
2026-01-07 00:35:40.132580 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-07 00:35:40.132591 | orchestrator | Wednesday 07 January 2026 00:35:16 +0000 (0:00:00.763) 0:07:24.693 *****
2026-01-07 00:35:40.132602 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.132612 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:40.132623 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:40.132634 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:40.132644 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:40.132655 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:40.132665 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:40.132676 | orchestrator |
2026-01-07 00:35:40.132709 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-07 00:35:40.132721 | orchestrator | Wednesday 07 January 2026 00:35:21 +0000 (0:00:05.603) 0:07:30.296 *****
2026-01-07 00:35:40.132732 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:40.132743 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:40.132754 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:40.132764 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:40.132775 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:40.132786 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:40.132797 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:40.132807 | orchestrator |
2026-01-07 00:35:40.132818 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-07 00:35:40.132829 | orchestrator | Wednesday 07 January 2026 00:35:22 +0000 (0:00:00.562) 0:07:30.858 *****
2026-01-07 00:35:40.132843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:35:40.132865 | orchestrator |
2026-01-07 00:35:40.132877 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-07 00:35:40.132888 | orchestrator | Wednesday 07 January 2026 00:35:23 +0000 (0:00:01.067) 0:07:31.926 *****
2026-01-07 00:35:40.132898 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.132909 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:40.132920 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:40.132931 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:40.132941 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:40.132952 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:40.132963 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:40.132974 | orchestrator |
2026-01-07 00:35:40.132985 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-07 00:35:40.132996 | orchestrator | Wednesday 07 January 2026 00:35:25 +0000 (0:00:01.922) 0:07:33.848 *****
2026-01-07 00:35:40.133006 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.133017 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:40.133028 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:40.133038 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:40.133049 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:40.133060 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:40.133070 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:40.133081 | orchestrator |
2026-01-07 00:35:40.133110 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-07 00:35:40.133122 | orchestrator | Wednesday 07 January 2026 00:35:26 +0000 (0:00:01.170) 0:07:35.018 *****
2026-01-07 00:35:40.133133 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:40.133144 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:40.133154 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:40.133165 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:40.133176 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:40.133186 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:40.133197 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:40.133208 | orchestrator |
2026-01-07 00:35:40.133219 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-07 00:35:40.133230 | orchestrator | Wednesday 07 January 2026 00:35:27 +0000 (0:00:00.929) 0:07:35.948 *****
2026-01-07 00:35:40.133241 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:35:40.133253 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:35:40.133264 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:35:40.133284 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:35:40.133295 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:35:40.133306 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:35:40.133317 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:35:40.133328 | orchestrator |
2026-01-07 00:35:40.133339 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-07 00:35:40.133350 | orchestrator | Wednesday 07 January 2026 00:35:29 +0000 (0:00:01.997) 0:07:37.946 *****
2026-01-07 00:35:40.133361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:35:40.133379 | orchestrator |
2026-01-07 00:35:40.133391 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-07 00:35:40.133401 | orchestrator | Wednesday 07 January 2026 00:35:30 +0000 (0:00:00.820) 0:07:38.767 *****
2026-01-07 00:35:40.133412 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:40.133423 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:40.133434 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:40.133445 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:40.133455 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:40.133466 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:40.133477 | orchestrator | changed:
[testbed-node-4] 2026-01-07 00:35:40.133487 | orchestrator | 2026-01-07 00:35:40.133505 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-01-07 00:36:12.382751 | orchestrator | Wednesday 07 January 2026 00:35:40 +0000 (0:00:09.706) 0:07:48.474 ***** 2026-01-07 00:36:12.382896 | orchestrator | ok: [testbed-manager] 2026-01-07 00:36:12.382923 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:36:12.383050 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:36:12.383074 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:36:12.383093 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:36:12.383113 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:36:12.383132 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:36:12.383150 | orchestrator | 2026-01-07 00:36:12.383170 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-01-07 00:36:12.383189 | orchestrator | Wednesday 07 January 2026 00:35:42 +0000 (0:00:02.058) 0:07:50.533 ***** 2026-01-07 00:36:12.383209 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:36:12.383228 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:36:12.383245 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:36:12.383265 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:36:12.383285 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:36:12.383304 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:36:12.383324 | orchestrator | 2026-01-07 00:36:12.383345 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-01-07 00:36:12.383365 | orchestrator | Wednesday 07 January 2026 00:35:43 +0000 (0:00:01.324) 0:07:51.857 ***** 2026-01-07 00:36:12.383387 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:12.383408 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:12.383429 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:12.383448 | orchestrator | changed: 
[testbed-node-4] 2026-01-07 00:36:12.383469 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:12.383488 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:12.383508 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:12.383526 | orchestrator | 2026-01-07 00:36:12.383546 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-01-07 00:36:12.383565 | orchestrator | 2026-01-07 00:36:12.383608 | orchestrator | TASK [Include hardening role] ************************************************** 2026-01-07 00:36:12.383628 | orchestrator | Wednesday 07 January 2026 00:35:44 +0000 (0:00:01.336) 0:07:53.193 ***** 2026-01-07 00:36:12.383648 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:36:12.383684 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:36:12.383703 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:36:12.383720 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:36:12.383736 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:36:12.383753 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:36:12.383770 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:36:12.383787 | orchestrator | 2026-01-07 00:36:12.383805 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-01-07 00:36:12.383823 | orchestrator | 2026-01-07 00:36:12.383876 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-01-07 00:36:12.383897 | orchestrator | Wednesday 07 January 2026 00:35:45 +0000 (0:00:00.727) 0:07:53.921 ***** 2026-01-07 00:36:12.383978 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:12.384000 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:12.384019 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:12.384037 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:12.384056 | orchestrator | changed: [testbed-node-0] 2026-01-07 
00:36:12.384075 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:12.384093 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:12.384112 | orchestrator | 2026-01-07 00:36:12.384131 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-01-07 00:36:12.384152 | orchestrator | Wednesday 07 January 2026 00:35:46 +0000 (0:00:01.379) 0:07:55.300 ***** 2026-01-07 00:36:12.384171 | orchestrator | ok: [testbed-manager] 2026-01-07 00:36:12.384189 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:36:12.384207 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:36:12.384225 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:36:12.384265 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:36:12.384285 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:36:12.384303 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:36:12.384345 | orchestrator | 2026-01-07 00:36:12.384364 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-01-07 00:36:12.384381 | orchestrator | Wednesday 07 January 2026 00:35:49 +0000 (0:00:02.205) 0:07:57.506 ***** 2026-01-07 00:36:12.384400 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:36:12.384497 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:36:12.384518 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:36:12.384529 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:36:12.384540 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:36:12.384551 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:36:12.384562 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:36:12.384573 | orchestrator | 2026-01-07 00:36:12.384584 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-01-07 00:36:12.384596 | orchestrator | Wednesday 07 January 2026 00:35:49 +0000 (0:00:00.518) 0:07:58.024 ***** 2026-01-07 00:36:12.384607 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:36:12.384621 | orchestrator | 2026-01-07 00:36:12.384632 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-01-07 00:36:12.384642 | orchestrator | Wednesday 07 January 2026 00:35:50 +0000 (0:00:01.036) 0:07:59.060 ***** 2026-01-07 00:36:12.384656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:36:12.384670 | orchestrator | 2026-01-07 00:36:12.384681 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-01-07 00:36:12.384692 | orchestrator | Wednesday 07 January 2026 00:35:51 +0000 (0:00:00.859) 0:07:59.920 ***** 2026-01-07 00:36:12.384703 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:12.384714 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:12.384725 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:12.384744 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:12.384763 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:12.384782 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:12.384820 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:12.384842 | orchestrator | 2026-01-07 00:36:12.384895 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-01-07 00:36:12.384918 | orchestrator | Wednesday 07 January 2026 00:36:00 +0000 (0:00:08.754) 0:08:08.674 ***** 2026-01-07 00:36:12.384962 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:12.384984 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:12.384998 | orchestrator | changed: [testbed-node-4] 2026-01-07 
00:36:12.385009 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:12.385036 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:12.385046 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:12.385057 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:12.385068 | orchestrator | 2026-01-07 00:36:12.385078 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-01-07 00:36:12.385089 | orchestrator | Wednesday 07 January 2026 00:36:01 +0000 (0:00:01.070) 0:08:09.745 ***** 2026-01-07 00:36:12.385100 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:12.385111 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:12.385122 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:12.385132 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:12.385143 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:12.385159 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:12.385177 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:12.385197 | orchestrator | 2026-01-07 00:36:12.385216 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-01-07 00:36:12.385235 | orchestrator | Wednesday 07 January 2026 00:36:02 +0000 (0:00:01.408) 0:08:11.153 ***** 2026-01-07 00:36:12.385254 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:12.385272 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:12.385303 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:12.385314 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:12.385325 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:12.385335 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:12.385346 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:12.385356 | orchestrator | 2026-01-07 00:36:12.385367 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-01-07 00:36:12.385378 | orchestrator | Wednesday 07 January 2026 00:36:04 +0000 (0:00:02.150) 0:08:13.303 *****
2026-01-07 00:36:12.385389 | orchestrator | changed: [testbed-manager]
2026-01-07 00:36:12.385447 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:36:12.385458 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:36:12.385469 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:36:12.385479 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:36:12.385490 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:36:12.385501 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:36:12.385511 | orchestrator |
2026-01-07 00:36:12.385522 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-07 00:36:12.385533 | orchestrator | Wednesday 07 January 2026 00:36:06 +0000 (0:00:01.240) 0:08:14.544 *****
2026-01-07 00:36:12.385544 | orchestrator | changed: [testbed-manager]
2026-01-07 00:36:12.385568 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:36:12.385579 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:36:12.385590 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:36:12.385601 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:36:12.385612 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:36:12.385623 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:36:12.385634 | orchestrator |
2026-01-07 00:36:12.385645 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-07 00:36:12.385656 | orchestrator |
2026-01-07 00:36:12.385667 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-07 00:36:12.385678 | orchestrator | Wednesday 07 January 2026 00:36:07 +0000 (0:00:01.119) 0:08:15.664 *****
2026-01-07 00:36:12.385698 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:36:12.385710 | orchestrator |
2026-01-07 00:36:12.385721 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-07 00:36:12.385731 | orchestrator | Wednesday 07 January 2026 00:36:08 +0000 (0:00:00.830) 0:08:16.495 *****
2026-01-07 00:36:12.385742 | orchestrator | ok: [testbed-manager]
2026-01-07 00:36:12.385753 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:36:12.385764 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:36:12.385784 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:36:12.385795 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:36:12.385806 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:36:12.385817 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:36:12.385827 | orchestrator |
2026-01-07 00:36:12.385838 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-07 00:36:12.385849 | orchestrator | Wednesday 07 January 2026 00:36:09 +0000 (0:00:01.051) 0:08:17.546 *****
2026-01-07 00:36:12.385860 | orchestrator | changed: [testbed-manager]
2026-01-07 00:36:12.385871 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:36:12.385881 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:36:12.385892 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:36:12.385903 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:36:12.385913 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:36:12.385924 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:36:12.385935 | orchestrator |
2026-01-07 00:36:12.386100 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-07 00:36:12.386112 | orchestrator | Wednesday 07 January 2026 00:36:10 +0000 (0:00:01.217) 0:08:18.763 *****
2026-01-07 00:36:12.386124 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:36:12.386135 | orchestrator |
2026-01-07 00:36:12.386146 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-07 00:36:12.386157 | orchestrator | Wednesday 07 January 2026 00:36:11 +0000 (0:00:01.034) 0:08:19.798 *****
2026-01-07 00:36:12.386167 | orchestrator | ok: [testbed-manager]
2026-01-07 00:36:12.386178 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:36:12.386189 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:36:12.386199 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:36:12.386210 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:36:12.386221 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:36:12.386231 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:36:12.386242 | orchestrator |
2026-01-07 00:36:12.386267 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-07 00:36:14.100127 | orchestrator | Wednesday 07 January 2026 00:36:12 +0000 (0:00:00.925) 0:08:20.723 *****
2026-01-07 00:36:14.100240 | orchestrator | changed: [testbed-manager]
2026-01-07 00:36:14.100257 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:36:14.100269 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:36:14.100280 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:36:14.100291 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:36:14.100302 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:36:14.100313 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:36:14.100324 | orchestrator |
2026-01-07 00:36:14.100336 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:36:14.100348 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-07 00:36:14.100361 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-07 00:36:14.100373 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-07 00:36:14.100383 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-07 00:36:14.100395 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-07 00:36:14.100405 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-07 00:36:14.100444 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-07 00:36:14.100456 | orchestrator |
2026-01-07 00:36:14.100467 | orchestrator |
2026-01-07 00:36:14.100478 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:36:14.100490 | orchestrator | Wednesday 07 January 2026 00:36:13 +0000 (0:00:01.161) 0:08:21.885 *****
2026-01-07 00:36:14.100501 | orchestrator | ===============================================================================
2026-01-07 00:36:14.100512 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.33s
2026-01-07 00:36:14.100523 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.85s
2026-01-07 00:36:14.100533 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.90s
2026-01-07 00:36:14.100544 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.44s
2026-01-07 00:36:14.100556 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.86s
2026-01-07 00:36:14.100567 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.74s
2026-01-07 00:36:14.100578 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.71s
2026-01-07 00:36:14.100603 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.63s
2026-01-07 00:36:14.100615 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.71s
2026-01-07 00:36:14.100626 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.52s
2026-01-07 00:36:14.100637 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.75s
2026-01-07 00:36:14.100648 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.38s
2026-01-07 00:36:14.100661 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.30s
2026-01-07 00:36:14.100675 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.14s
2026-01-07 00:36:14.100687 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.78s
2026-01-07 00:36:14.100701 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.77s
2026-01-07 00:36:14.100714 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.96s
2026-01-07 00:36:14.100727 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.97s
2026-01-07 00:36:14.100740 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.95s
2026-01-07 00:36:14.100753 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.60s
2026-01-07 00:36:14.454151 | orchestrator | + osism apply fail2ban
2026-01-07 00:36:27.213348 | orchestrator | 2026-01-07 00:36:27 | INFO  | Task 3535cdef-6734-4f72-a7c0-82488c608d4f (fail2ban) was prepared for execution.
2026-01-07 00:36:27.213448 | orchestrator | 2026-01-07 00:36:27 | INFO  | It takes a moment until task 3535cdef-6734-4f72-a7c0-82488c608d4f (fail2ban) has been started and output is visible here.
2026-01-07 00:36:49.715473 | orchestrator |
2026-01-07 00:36:49.715599 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-07 00:36:49.715617 | orchestrator |
2026-01-07 00:36:49.715629 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-07 00:36:49.715641 | orchestrator | Wednesday 07 January 2026 00:36:32 +0000 (0:00:00.292) 0:00:00.292 *****
2026-01-07 00:36:49.715654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:36:49.715669 | orchestrator |
2026-01-07 00:36:49.715680 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-07 00:36:49.715691 | orchestrator | Wednesday 07 January 2026 00:36:33 +0000 (0:00:01.189) 0:00:01.482 *****
2026-01-07 00:36:49.715731 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:36:49.715743 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:36:49.715754 | orchestrator | changed: [testbed-manager]
2026-01-07 00:36:49.715765 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:36:49.715776 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:36:49.715786 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:36:49.715797 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:36:49.715872 | orchestrator |
2026-01-07 00:36:49.715886 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-07 00:36:49.715897 | orchestrator | Wednesday 07 January 2026 00:36:44 +0000 (0:00:11.517) 0:00:12.999 *****
2026-01-07 00:36:49.715907 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:36:49.715918 | orchestrator | changed: [testbed-manager]
2026-01-07 00:36:49.715929 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:36:49.715939 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:36:49.715950 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:36:49.715960 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:36:49.715971 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:36:49.715982 | orchestrator |
2026-01-07 00:36:49.715993 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-07 00:36:49.716004 | orchestrator | Wednesday 07 January 2026 00:36:46 +0000 (0:00:01.470) 0:00:14.471 *****
2026-01-07 00:36:49.716017 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:36:49.716031 | orchestrator | ok: [testbed-manager]
2026-01-07 00:36:49.716044 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:36:49.716056 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:36:49.716068 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:36:49.716080 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:36:49.716092 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:36:49.716104 | orchestrator |
2026-01-07 00:36:49.716117 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-07 00:36:49.716131 | orchestrator | Wednesday 07 January 2026 00:36:47 +0000 (0:00:01.470) 0:00:15.942 *****
2026-01-07 00:36:49.716143 | orchestrator | changed: [testbed-manager]
2026-01-07 00:36:49.716155 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:36:49.716168 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:36:49.716180 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:36:49.716192 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:36:49.716204 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:36:49.716217 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:36:49.716230 | orchestrator |
2026-01-07 00:36:49.716242 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:36:49.716255 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:49.716270 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:49.716282 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:49.716311 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:49.716325 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:49.716337 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:49.716350 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:49.716363 | orchestrator |
2026-01-07 00:36:49.716376 | orchestrator |
2026-01-07 00:36:49.716389 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:36:49.716409 | orchestrator | Wednesday 07 January 2026 00:36:49 +0000 (0:00:01.598) 0:00:17.540 *****
2026-01-07 00:36:49.716420 | orchestrator | ===============================================================================
2026-01-07 00:36:49.716431 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.52s
2026-01-07 00:36:49.716441 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.60s
2026-01-07 00:36:49.716452 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.47s
2026-01-07 00:36:49.716463 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.47s
2026-01-07 00:36:49.716474 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.19s
2026-01-07 00:36:50.027683 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-07 00:36:50.027790 | orchestrator | + osism apply network
2026-01-07 00:37:02.276265 | orchestrator | 2026-01-07 00:37:02 | INFO  | Task c86fb46a-d108-46f4-bb56-13b121d43147 (network) was prepared for execution.
2026-01-07 00:37:02.276387 | orchestrator | 2026-01-07 00:37:02 | INFO  | It takes a moment until task c86fb46a-d108-46f4-bb56-13b121d43147 (network) has been started and output is visible here.
2026-01-07 00:37:31.524026 | orchestrator |
2026-01-07 00:37:31.524184 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-07 00:37:31.524215 | orchestrator |
2026-01-07 00:37:31.524236 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-07 00:37:31.524256 | orchestrator | Wednesday 07 January 2026 00:37:06 +0000 (0:00:00.322) 0:00:00.322 *****
2026-01-07 00:37:31.524275 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:31.524296 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:31.524310 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:31.524322 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:31.524333 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:31.524345 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:31.524355 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:31.524366 | orchestrator |
2026-01-07 00:37:31.524378 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-07 00:37:31.524389 | orchestrator | Wednesday 07 January 2026 00:37:07 +0000 (0:00:00.755) 0:00:01.077 *****
2026-01-07 00:37:31.524403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:37:31.524417 | orchestrator |
2026-01-07 00:37:31.524428 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-07 00:37:31.524439 | orchestrator | Wednesday 07 January 2026 00:37:08 +0000 (0:00:01.188) 0:00:02.266 *****
2026-01-07 00:37:31.524450 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:31.524461 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:31.524472 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:31.524483 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:31.524494 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:31.524504 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:31.524515 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:31.524528 | orchestrator |
2026-01-07 00:37:31.524542 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-07 00:37:31.524554 | orchestrator | Wednesday 07 January 2026 00:37:10 +0000 (0:00:02.163) 0:00:04.429 *****
2026-01-07 00:37:31.524567 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:31.524579 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:31.524592 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:31.524605 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:31.524617 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:31.524628 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:31.524639 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:31.524650 | orchestrator |
2026-01-07 00:37:31.524661 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-07 00:37:31.524700 | orchestrator | Wednesday 07 January 2026 00:37:12 +0000 (0:00:01.788) 0:00:06.218 *****
2026-01-07 00:37:31.524790 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-01-07 00:37:31.524810 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-01-07 00:37:31.524821 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-01-07 00:37:31.524832 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-01-07 00:37:31.524843 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-01-07 00:37:31.524854 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-01-07 00:37:31.524867 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-01-07 00:37:31.524886 | orchestrator | 2026-01-07 00:37:31.524904 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-01-07 00:37:31.524924 | orchestrator | Wednesday 07 January 2026 00:37:13 +0000 (0:00:00.982) 0:00:07.201 ***** 2026-01-07 00:37:31.524937 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-07 00:37:31.524948 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-07 00:37:31.524959 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:37:31.524970 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 00:37:31.524981 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-07 00:37:31.524991 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-07 00:37:31.525002 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-07 00:37:31.525012 | orchestrator | 2026-01-07 00:37:31.525023 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-01-07 00:37:31.525034 | orchestrator | Wednesday 07 January 2026 00:37:16 +0000 (0:00:03.272) 0:00:10.474 ***** 2026-01-07 00:37:31.525045 | orchestrator | changed: [testbed-manager] 2026-01-07 00:37:31.525056 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:37:31.525067 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:37:31.525078 | orchestrator | changed: 
[testbed-node-2] 2026-01-07 00:37:31.525088 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:37:31.525099 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:37:31.525109 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:37:31.525120 | orchestrator | 2026-01-07 00:37:31.525131 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-07 00:37:31.525142 | orchestrator | Wednesday 07 January 2026 00:37:18 +0000 (0:00:01.620) 0:00:12.094 ***** 2026-01-07 00:37:31.525153 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 00:37:31.525163 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-07 00:37:31.525174 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-07 00:37:31.525184 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:37:31.525195 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-07 00:37:31.525206 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-07 00:37:31.525217 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-07 00:37:31.525227 | orchestrator | 2026-01-07 00:37:31.525238 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-07 00:37:31.525249 | orchestrator | Wednesday 07 January 2026 00:37:20 +0000 (0:00:01.832) 0:00:13.926 ***** 2026-01-07 00:37:31.525260 | orchestrator | ok: [testbed-manager] 2026-01-07 00:37:31.525271 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:37:31.525281 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:37:31.525292 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:37:31.525302 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:37:31.525313 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:37:31.525324 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:37:31.525334 | orchestrator | 2026-01-07 00:37:31.525345 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-07 00:37:31.525378 | 
orchestrator | Wednesday 07 January 2026 00:37:21 +0000 (0:00:01.153) 0:00:15.080 ***** 2026-01-07 00:37:31.525390 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:37:31.525400 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:37:31.525411 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:37:31.525432 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:37:31.525443 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:37:31.525453 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:37:31.525464 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:37:31.525475 | orchestrator | 2026-01-07 00:37:31.525486 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-07 00:37:31.525497 | orchestrator | Wednesday 07 January 2026 00:37:22 +0000 (0:00:00.678) 0:00:15.758 ***** 2026-01-07 00:37:31.525508 | orchestrator | ok: [testbed-manager] 2026-01-07 00:37:31.525518 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:37:31.525529 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:37:31.525540 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:37:31.525551 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:37:31.525561 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:37:31.525572 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:37:31.525583 | orchestrator | 2026-01-07 00:37:31.525594 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-07 00:37:31.525604 | orchestrator | Wednesday 07 January 2026 00:37:24 +0000 (0:00:02.244) 0:00:18.003 ***** 2026-01-07 00:37:31.525615 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:37:31.525626 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:37:31.525637 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:37:31.525648 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:37:31.525658 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:37:31.525669 | 
orchestrator | skipping: [testbed-node-5] 2026-01-07 00:37:31.525681 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-07 00:37:31.525693 | orchestrator | 2026-01-07 00:37:31.525704 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-07 00:37:31.525738 | orchestrator | Wednesday 07 January 2026 00:37:25 +0000 (0:00:00.949) 0:00:18.952 ***** 2026-01-07 00:37:31.525757 | orchestrator | ok: [testbed-manager] 2026-01-07 00:37:31.525777 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:37:31.525795 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:37:31.525809 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:37:31.525820 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:37:31.525830 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:37:31.525841 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:37:31.525852 | orchestrator | 2026-01-07 00:37:31.525863 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-07 00:37:31.525874 | orchestrator | Wednesday 07 January 2026 00:37:27 +0000 (0:00:01.737) 0:00:20.690 ***** 2026-01-07 00:37:31.525885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:37:31.525899 | orchestrator | 2026-01-07 00:37:31.525910 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-07 00:37:31.525921 | orchestrator | Wednesday 07 January 2026 00:37:28 +0000 (0:00:01.319) 0:00:22.010 ***** 2026-01-07 00:37:31.525932 | orchestrator | ok: [testbed-manager] 2026-01-07 00:37:31.525943 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:37:31.525973 | orchestrator 
| ok: [testbed-node-1] 2026-01-07 00:37:31.525985 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:37:31.525996 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:37:31.526007 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:37:31.526079 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:37:31.526094 | orchestrator | 2026-01-07 00:37:31.526105 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-07 00:37:31.526121 | orchestrator | Wednesday 07 January 2026 00:37:29 +0000 (0:00:01.129) 0:00:23.140 ***** 2026-01-07 00:37:31.526133 | orchestrator | ok: [testbed-manager] 2026-01-07 00:37:31.526144 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:37:31.526155 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:37:31.526175 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:37:31.526186 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:37:31.526197 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:37:31.526207 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:37:31.526218 | orchestrator | 2026-01-07 00:37:31.526229 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-07 00:37:31.526240 | orchestrator | Wednesday 07 January 2026 00:37:30 +0000 (0:00:00.647) 0:00:23.787 ***** 2026-01-07 00:37:31.526251 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:37:31.526262 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:37:31.526273 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:37:31.526284 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:37:31.526294 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:37:31.526305 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:37:31.526316 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:37:31.526327 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:37:31.526337 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:37:31.526348 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:37:31.526359 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:37:31.526370 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:37:31.526381 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:37:31.526392 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:37:31.526403 | orchestrator | 2026-01-07 00:37:31.526424 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-07 00:37:48.771542 | orchestrator | Wednesday 07 January 2026 00:37:31 +0000 (0:00:01.271) 0:00:25.058 ***** 2026-01-07 00:37:48.771662 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:37:48.771715 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:37:48.771727 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:37:48.771739 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:37:48.771750 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:37:48.771761 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:37:48.771773 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:37:48.771784 | orchestrator | 2026-01-07 00:37:48.771796 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-07 00:37:48.771808 | orchestrator | Wednesday 07 January 2026 00:37:32 +0000 (0:00:00.643) 0:00:25.701 ***** 2026-01-07 00:37:48.771821 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-manager, testbed-node-3, testbed-node-2, testbed-node-5, testbed-node-4 2026-01-07 00:37:48.771835 | orchestrator | 2026-01-07 00:37:48.771846 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-07 00:37:48.771857 | orchestrator | Wednesday 07 January 2026 00:37:36 +0000 (0:00:04.817) 0:00:30.519 ***** 2026-01-07 00:37:48.771870 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.771884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.771932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.771944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.771955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-07 
00:37:48.771981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.771993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.772004 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772034 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772101 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772113 | orchestrator | 2026-01-07 00:37:48.772126 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-07 00:37:48.772139 | orchestrator | Wednesday 07 January 2026 00:37:42 +0000 (0:00:05.966) 0:00:36.486 ***** 2026-01-07 00:37:48.772152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.772175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.772188 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.772201 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.772214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.772244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.772270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:37:48.772283 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 
'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:37:48.772333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:38:02.023531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:38:02.023689 | orchestrator | 2026-01-07 00:38:02.023708 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-07 00:38:02.023719 | orchestrator | Wednesday 07 January 2026 00:37:48 +0000 (0:00:05.810) 0:00:42.297 ***** 2026-01-07 00:38:02.023757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:38:02.023768 | orchestrator | 2026-01-07 00:38:02.023778 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-07 00:38:02.023788 | orchestrator | Wednesday 07 January 2026 00:37:50 +0000 (0:00:01.330) 0:00:43.628 ***** 2026-01-07 00:38:02.023798 | orchestrator | ok: [testbed-manager] 2026-01-07 00:38:02.023809 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:38:02.023824 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:38:02.023841 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:38:02.023858 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:38:02.023873 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:38:02.023890 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:38:02.023907 | orchestrator | 2026-01-07 00:38:02.023924 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-07 00:38:02.023940 | orchestrator | Wednesday 07 January 2026 00:37:51 +0000 (0:00:01.149) 0:00:44.777 ***** 2026-01-07 00:38:02.023957 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:38:02.023976 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:38:02.023993 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:38:02.024010 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:38:02.024028 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:38:02.024038 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:38:02.024048 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:38:02.024057 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:38:02.024068 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:38:02.024080 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:38:02.024093 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:38:02.024104 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:38:02.024116 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:38:02.024143 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:38:02.024155 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:38:02.024167 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:38:02.024179 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:38:02.024190 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:38:02.024202 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:38:02.024214 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:38:02.024226 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:38:02.024237 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:38:02.024249 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:38:02.024260 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:38:02.024271 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:38:02.024283 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:38:02.024304 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:38:02.024316 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:38:02.024328 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:38:02.024340 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:38:02.024352 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:38:02.024363 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:38:02.024375 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:38:02.024387 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:38:02.024399 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:38:02.024410 | orchestrator | 2026-01-07 00:38:02.024422 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-07 00:38:02.024451 | orchestrator | Wednesday 07 January 2026 00:37:52 +0000 (0:00:00.946) 0:00:45.723 ***** 2026-01-07 00:38:02.024463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:38:02.024473 | orchestrator | 2026-01-07 00:38:02.024482 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-07 00:38:02.024492 | orchestrator | Wednesday 07 January 2026 00:37:53 +0000 (0:00:01.262) 0:00:46.986 ***** 2026-01-07 00:38:02.024501 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:38:02.024511 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:38:02.024521 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:38:02.024530 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:38:02.024540 | orchestrator | skipping: [testbed-node-3] 2026-01-07 
00:38:02.024549 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:38:02.024558 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:38:02.024568 | orchestrator | 2026-01-07 00:38:02.024577 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-07 00:38:02.024587 | orchestrator | Wednesday 07 January 2026 00:37:54 +0000 (0:00:00.653) 0:00:47.639 ***** 2026-01-07 00:38:02.024596 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:38:02.024606 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:38:02.024615 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:38:02.024625 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:38:02.024634 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:38:02.024672 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:38:02.024683 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:38:02.024693 | orchestrator | 2026-01-07 00:38:02.024702 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-07 00:38:02.024712 | orchestrator | Wednesday 07 January 2026 00:37:54 +0000 (0:00:00.830) 0:00:48.469 ***** 2026-01-07 00:38:02.024721 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:38:02.024731 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:38:02.024740 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:38:02.024750 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:38:02.024759 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:38:02.024768 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:38:02.024778 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:38:02.024787 | orchestrator | 2026-01-07 00:38:02.024796 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-01-07 00:38:02.024806 | orchestrator | Wednesday 07 January 2026 00:37:55 +0000 (0:00:00.619) 0:00:49.089 ***** 2026-01-07 
00:38:02.024816 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:38:02.024825 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:38:02.024835 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:38:02.024845 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:38:02.024854 | orchestrator | ok: [testbed-manager] 2026-01-07 00:38:02.024871 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:38:02.024880 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:38:02.024890 | orchestrator | 2026-01-07 00:38:02.024900 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-01-07 00:38:02.024909 | orchestrator | Wednesday 07 January 2026 00:37:57 +0000 (0:00:01.702) 0:00:50.792 ***** 2026-01-07 00:38:02.024919 | orchestrator | ok: [testbed-manager] 2026-01-07 00:38:02.024928 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:38:02.024938 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:38:02.024948 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:38:02.024957 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:38:02.024966 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:38:02.024976 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:38:02.024985 | orchestrator | 2026-01-07 00:38:02.025000 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-01-07 00:38:02.025010 | orchestrator | Wednesday 07 January 2026 00:37:58 +0000 (0:00:01.011) 0:00:51.803 ***** 2026-01-07 00:38:02.025020 | orchestrator | ok: [testbed-manager] 2026-01-07 00:38:02.025029 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:38:02.025039 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:38:02.025048 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:38:02.025057 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:38:02.025067 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:38:02.025076 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:38:02.025086 | orchestrator | 2026-01-07 00:38:02.025095 
| orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-01-07 00:38:02.025105 | orchestrator | Wednesday 07 January 2026 00:38:00 +0000 (0:00:02.340) 0:00:54.143 ***** 2026-01-07 00:38:02.025114 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:38:02.025124 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:38:02.025133 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:38:02.025143 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:38:02.025152 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:38:02.025162 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:38:02.025171 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:38:02.025180 | orchestrator | 2026-01-07 00:38:02.025190 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-01-07 00:38:02.025200 | orchestrator | Wednesday 07 January 2026 00:38:01 +0000 (0:00:00.876) 0:00:55.020 ***** 2026-01-07 00:38:02.025209 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:38:02.025219 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:38:02.025228 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:38:02.025238 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:38:02.025247 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:38:02.025256 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:38:02.025266 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:38:02.025275 | orchestrator | 2026-01-07 00:38:02.025285 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:38:02.025296 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-07 00:38:02.025307 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-07 00:38:02.025323 | orchestrator | testbed-node-1 : ok=24 
 changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-07 00:38:02.448136 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-07 00:38:02.448241 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-07 00:38:02.448256 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-07 00:38:02.448293 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-07 00:38:02.448306 | orchestrator | 2026-01-07 00:38:02.448317 | orchestrator | 2026-01-07 00:38:02.448329 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:38:02.448341 | orchestrator | Wednesday 07 January 2026 00:38:02 +0000 (0:00:00.544) 0:00:55.564 ***** 2026-01-07 00:38:02.448352 | orchestrator | =============================================================================== 2026-01-07 00:38:02.448363 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.97s 2026-01-07 00:38:02.448374 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.81s 2026-01-07 00:38:02.448385 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.82s 2026-01-07 00:38:02.448396 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.27s 2026-01-07 00:38:02.448412 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.34s 2026-01-07 00:38:02.448431 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.24s 2026-01-07 00:38:02.448449 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.16s 2026-01-07 00:38:02.448467 | orchestrator | osism.commons.network : Remove 
netplan configuration template ----------- 1.83s 2026-01-07 00:38:02.448484 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.79s 2026-01-07 00:38:02.448501 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.74s 2026-01-07 00:38:02.448520 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.70s 2026-01-07 00:38:02.448537 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.62s 2026-01-07 00:38:02.448555 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.33s 2026-01-07 00:38:02.448572 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s 2026-01-07 00:38:02.448589 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.27s 2026-01-07 00:38:02.448608 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.26s 2026-01-07 00:38:02.448626 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.19s 2026-01-07 00:38:02.448697 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s 2026-01-07 00:38:02.448720 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s 2026-01-07 00:38:02.448739 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.13s 2026-01-07 00:38:02.776418 | orchestrator | + osism apply wireguard 2026-01-07 00:38:14.884566 | orchestrator | 2026-01-07 00:38:14 | INFO  | Task 58f19dd3-a327-4938-9b62-dabc0f91d742 (wireguard) was prepared for execution. 2026-01-07 00:38:14.884747 | orchestrator | 2026-01-07 00:38:14 | INFO  | It takes a moment until task 58f19dd3-a327-4938-9b62-dabc0f91d742 (wireguard) has been started and output is visible here. 
2026-01-07 00:38:36.605094 | orchestrator | 2026-01-07 00:38:36.605219 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-07 00:38:36.605237 | orchestrator | 2026-01-07 00:38:36.605249 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-07 00:38:36.605261 | orchestrator | Wednesday 07 January 2026 00:38:19 +0000 (0:00:00.220) 0:00:00.220 ***** 2026-01-07 00:38:36.605272 | orchestrator | ok: [testbed-manager] 2026-01-07 00:38:36.605291 | orchestrator | 2026-01-07 00:38:36.605315 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-07 00:38:36.605335 | orchestrator | Wednesday 07 January 2026 00:38:20 +0000 (0:00:01.610) 0:00:01.830 ***** 2026-01-07 00:38:36.605354 | orchestrator | changed: [testbed-manager] 2026-01-07 00:38:36.605402 | orchestrator | 2026-01-07 00:38:36.605415 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-07 00:38:36.605427 | orchestrator | Wednesday 07 January 2026 00:38:27 +0000 (0:00:06.862) 0:00:08.693 ***** 2026-01-07 00:38:36.605438 | orchestrator | changed: [testbed-manager] 2026-01-07 00:38:36.605449 | orchestrator | 2026-01-07 00:38:36.605460 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-07 00:38:36.605471 | orchestrator | Wednesday 07 January 2026 00:38:28 +0000 (0:00:00.551) 0:00:09.244 ***** 2026-01-07 00:38:36.605482 | orchestrator | changed: [testbed-manager] 2026-01-07 00:38:36.605492 | orchestrator | 2026-01-07 00:38:36.605503 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-07 00:38:36.605514 | orchestrator | Wednesday 07 January 2026 00:38:28 +0000 (0:00:00.457) 0:00:09.701 ***** 2026-01-07 00:38:36.605525 | orchestrator | ok: [testbed-manager] 2026-01-07 00:38:36.605536 | orchestrator | 2026-01-07 
00:38:36.605546 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-07 00:38:36.605557 | orchestrator | Wednesday 07 January 2026 00:38:29 +0000 (0:00:00.676) 0:00:10.377 ***** 2026-01-07 00:38:36.605625 | orchestrator | ok: [testbed-manager] 2026-01-07 00:38:36.605640 | orchestrator | 2026-01-07 00:38:36.605652 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-07 00:38:36.605665 | orchestrator | Wednesday 07 January 2026 00:38:29 +0000 (0:00:00.417) 0:00:10.795 ***** 2026-01-07 00:38:36.605677 | orchestrator | ok: [testbed-manager] 2026-01-07 00:38:36.605689 | orchestrator | 2026-01-07 00:38:36.605702 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-07 00:38:36.605715 | orchestrator | Wednesday 07 January 2026 00:38:30 +0000 (0:00:00.420) 0:00:11.215 ***** 2026-01-07 00:38:36.605727 | orchestrator | changed: [testbed-manager] 2026-01-07 00:38:36.605739 | orchestrator | 2026-01-07 00:38:36.605753 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-07 00:38:36.605766 | orchestrator | Wednesday 07 January 2026 00:38:31 +0000 (0:00:01.220) 0:00:12.436 ***** 2026-01-07 00:38:36.605779 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-07 00:38:36.605791 | orchestrator | changed: [testbed-manager] 2026-01-07 00:38:36.605804 | orchestrator | 2026-01-07 00:38:36.605818 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-07 00:38:36.605831 | orchestrator | Wednesday 07 January 2026 00:38:32 +0000 (0:00:00.954) 0:00:13.391 ***** 2026-01-07 00:38:36.605844 | orchestrator | changed: [testbed-manager] 2026-01-07 00:38:36.605857 | orchestrator | 2026-01-07 00:38:36.605876 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-07 
00:38:36.605896 | orchestrator | Wednesday 07 January 2026 00:38:35 +0000 (0:00:02.744) 0:00:16.135 ***** 2026-01-07 00:38:36.605915 | orchestrator | changed: [testbed-manager] 2026-01-07 00:38:36.605932 | orchestrator | 2026-01-07 00:38:36.605952 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:38:36.605971 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:38:36.605989 | orchestrator | 2026-01-07 00:38:36.606008 | orchestrator | 2026-01-07 00:38:36.606097 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:38:36.606117 | orchestrator | Wednesday 07 January 2026 00:38:36 +0000 (0:00:01.000) 0:00:17.136 ***** 2026-01-07 00:38:36.606136 | orchestrator | =============================================================================== 2026-01-07 00:38:36.606155 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.86s 2026-01-07 00:38:36.606173 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.74s 2026-01-07 00:38:36.606193 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.61s 2026-01-07 00:38:36.606211 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s 2026-01-07 00:38:36.606246 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.00s 2026-01-07 00:38:36.606265 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s 2026-01-07 00:38:36.606276 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s 2026-01-07 00:38:36.606287 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-01-07 00:38:36.606298 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.46s 2026-01-07 00:38:36.606309 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-01-07 00:38:36.606321 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2026-01-07 00:38:36.940856 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-07 00:38:36.979377 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-07 00:38:36.979473 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-07 00:38:37.053422 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 189 0 --:--:-- --:--:-- --:--:-- 191 2026-01-07 00:38:37.068278 | orchestrator | + osism apply --environment custom workarounds 2026-01-07 00:38:39.067363 | orchestrator | 2026-01-07 00:38:39 | INFO  | Trying to run play workarounds in environment custom 2026-01-07 00:38:49.258274 | orchestrator | 2026-01-07 00:38:49 | INFO  | Task 80bd677f-1d96-4309-8cad-38094170bf21 (workarounds) was prepared for execution. 2026-01-07 00:38:49.258402 | orchestrator | 2026-01-07 00:38:49 | INFO  | It takes a moment until task 80bd677f-1d96-4309-8cad-38094170bf21 (workarounds) has been started and output is visible here. 
2026-01-07 00:39:14.874131 | orchestrator | 2026-01-07 00:39:14.874263 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:39:14.874281 | orchestrator | 2026-01-07 00:39:14.874293 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-07 00:39:14.874305 | orchestrator | Wednesday 07 January 2026 00:38:53 +0000 (0:00:00.131) 0:00:00.131 ***** 2026-01-07 00:39:14.874316 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-07 00:39:14.874327 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-07 00:39:14.874338 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-07 00:39:14.874349 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-07 00:39:14.874360 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-07 00:39:14.874371 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-07 00:39:14.874381 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-07 00:39:14.874392 | orchestrator | 2026-01-07 00:39:14.874403 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-07 00:39:14.874413 | orchestrator | 2026-01-07 00:39:14.874424 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-07 00:39:14.874435 | orchestrator | Wednesday 07 January 2026 00:38:54 +0000 (0:00:00.863) 0:00:00.995 ***** 2026-01-07 00:39:14.874446 | orchestrator | ok: [testbed-manager] 2026-01-07 00:39:14.874459 | orchestrator | 2026-01-07 00:39:14.874469 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-07 00:39:14.874480 | orchestrator | 2026-01-07 00:39:14.874539 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-01-07 00:39:14.874551 | orchestrator | Wednesday 07 January 2026 00:38:57 +0000 (0:00:02.647) 0:00:03.642 ***** 2026-01-07 00:39:14.874562 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:39:14.874573 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:39:14.874584 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:39:14.874595 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:39:14.874605 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:39:14.874639 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:39:14.874652 | orchestrator | 2026-01-07 00:39:14.874664 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-07 00:39:14.874676 | orchestrator | 2026-01-07 00:39:14.874689 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-07 00:39:14.874701 | orchestrator | Wednesday 07 January 2026 00:38:58 +0000 (0:00:01.829) 0:00:05.471 ***** 2026-01-07 00:39:14.874714 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-07 00:39:14.874727 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-07 00:39:14.874740 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-07 00:39:14.874752 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-07 00:39:14.874765 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-07 00:39:14.874777 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-07 00:39:14.874790 | orchestrator | 2026-01-07 00:39:14.874802 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-01-07 00:39:14.874815 | orchestrator | Wednesday 07 January 2026 00:39:00 +0000 (0:00:01.534) 0:00:07.006 ***** 2026-01-07 00:39:14.874827 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:39:14.874840 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:39:14.874853 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:39:14.874864 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:39:14.874877 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:39:14.874889 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:39:14.874901 | orchestrator | 2026-01-07 00:39:14.874915 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-07 00:39:14.874927 | orchestrator | Wednesday 07 January 2026 00:39:04 +0000 (0:00:03.801) 0:00:10.807 ***** 2026-01-07 00:39:14.874939 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:39:14.874952 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:39:14.874965 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:39:14.874982 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:39:14.874993 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:39:14.875004 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:39:14.875014 | orchestrator | 2026-01-07 00:39:14.875025 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-07 00:39:14.875036 | orchestrator | 2026-01-07 00:39:14.875047 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-07 00:39:14.875058 | orchestrator | Wednesday 07 January 2026 00:39:04 +0000 (0:00:00.710) 0:00:11.518 ***** 2026-01-07 00:39:14.875068 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:39:14.875079 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:39:14.875090 | orchestrator | changed: [testbed-node-4] 2026-01-07 
00:39:14.875100 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:39:14.875111 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:39:14.875121 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:39:14.875132 | orchestrator | changed: [testbed-manager] 2026-01-07 00:39:14.875142 | orchestrator | 2026-01-07 00:39:14.875153 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-07 00:39:14.875164 | orchestrator | Wednesday 07 January 2026 00:39:06 +0000 (0:00:01.608) 0:00:13.126 ***** 2026-01-07 00:39:14.875175 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:39:14.875185 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:39:14.875196 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:39:14.875207 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:39:14.875217 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:39:14.875228 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:39:14.875263 | orchestrator | changed: [testbed-manager] 2026-01-07 00:39:14.875275 | orchestrator | 2026-01-07 00:39:14.875286 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-07 00:39:14.875297 | orchestrator | Wednesday 07 January 2026 00:39:08 +0000 (0:00:01.563) 0:00:14.690 ***** 2026-01-07 00:39:14.875308 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:39:14.875319 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:39:14.875329 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:39:14.875340 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:39:14.875350 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:39:14.875361 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:39:14.875372 | orchestrator | ok: [testbed-manager] 2026-01-07 00:39:14.875382 | orchestrator | 2026-01-07 00:39:14.875393 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-07 00:39:14.875404 | orchestrator 
| Wednesday 07 January 2026 00:39:09 +0000 (0:00:01.522) 0:00:16.212 ***** 2026-01-07 00:39:14.875415 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:39:14.875426 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:39:14.875436 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:39:14.875447 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:39:14.875457 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:39:14.875468 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:39:14.875478 | orchestrator | changed: [testbed-manager] 2026-01-07 00:39:14.875508 | orchestrator | 2026-01-07 00:39:14.875519 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-07 00:39:14.875530 | orchestrator | Wednesday 07 January 2026 00:39:11 +0000 (0:00:01.786) 0:00:17.999 ***** 2026-01-07 00:39:14.875541 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:39:14.875551 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:39:14.875562 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:39:14.875572 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:39:14.875583 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:39:14.875593 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:39:14.875604 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:39:14.875615 | orchestrator | 2026-01-07 00:39:14.875626 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-07 00:39:14.875636 | orchestrator | 2026-01-07 00:39:14.875647 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-07 00:39:14.875658 | orchestrator | Wednesday 07 January 2026 00:39:11 +0000 (0:00:00.601) 0:00:18.600 ***** 2026-01-07 00:39:14.875669 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:39:14.875679 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:39:14.875690 | orchestrator | ok: [testbed-node-4] 
2026-01-07 00:39:14.875701 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:39:14.875711 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:39:14.875722 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:39:14.875732 | orchestrator | ok: [testbed-manager] 2026-01-07 00:39:14.875743 | orchestrator | 2026-01-07 00:39:14.875754 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:39:14.875766 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:39:14.875778 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:14.875788 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:14.875799 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:14.875810 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:14.875828 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:14.875838 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:14.875849 | orchestrator | 2026-01-07 00:39:14.875860 | orchestrator | 2026-01-07 00:39:14.875871 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:39:14.875887 | orchestrator | Wednesday 07 January 2026 00:39:14 +0000 (0:00:02.868) 0:00:21.468 ***** 2026-01-07 00:39:14.875898 | orchestrator | =============================================================================== 2026-01-07 00:39:14.875909 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.80s 2026-01-07 00:39:14.875920 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.87s 2026-01-07 00:39:14.875930 | orchestrator | Apply netplan configuration --------------------------------------------- 2.65s 2026-01-07 00:39:14.875941 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s 2026-01-07 00:39:14.875952 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s 2026-01-07 00:39:14.875962 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s 2026-01-07 00:39:14.875973 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.56s 2026-01-07 00:39:14.875984 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s 2026-01-07 00:39:14.875994 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s 2026-01-07 00:39:14.876005 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.86s 2026-01-07 00:39:14.876016 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s 2026-01-07 00:39:14.876033 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s 2026-01-07 00:39:15.558538 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-07 00:39:27.634791 | orchestrator | 2026-01-07 00:39:27 | INFO  | Task f26ade24-1636-4a2a-9516-96d8854b0876 (reboot) was prepared for execution. 2026-01-07 00:39:27.634870 | orchestrator | 2026-01-07 00:39:27 | INFO  | It takes a moment until task f26ade24-1636-4a2a-9516-96d8854b0876 (reboot) has been started and output is visible here. 
2026-01-07 00:39:38.422002 | orchestrator | 2026-01-07 00:39:38.422159 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:38.422169 | orchestrator | 2026-01-07 00:39:38.422173 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:38.422178 | orchestrator | Wednesday 07 January 2026 00:39:32 +0000 (0:00:00.211) 0:00:00.211 ***** 2026-01-07 00:39:38.422183 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:39:38.422187 | orchestrator | 2026-01-07 00:39:38.422191 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:38.422196 | orchestrator | Wednesday 07 January 2026 00:39:32 +0000 (0:00:00.102) 0:00:00.314 ***** 2026-01-07 00:39:38.422200 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:39:38.422204 | orchestrator | 2026-01-07 00:39:38.422208 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:39:38.422212 | orchestrator | Wednesday 07 January 2026 00:39:33 +0000 (0:00:00.978) 0:00:01.292 ***** 2026-01-07 00:39:38.422216 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:39:38.422219 | orchestrator | 2026-01-07 00:39:38.422223 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:38.422227 | orchestrator | 2026-01-07 00:39:38.422231 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:38.422235 | orchestrator | Wednesday 07 January 2026 00:39:33 +0000 (0:00:00.121) 0:00:01.414 ***** 2026-01-07 00:39:38.422239 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:39:38.422261 | orchestrator | 2026-01-07 00:39:38.422265 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:38.422269 | orchestrator | Wednesday 07 January 
2026 00:39:33 +0000 (0:00:00.100) 0:00:01.515 ***** 2026-01-07 00:39:38.422273 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:39:38.422277 | orchestrator | 2026-01-07 00:39:38.422281 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:39:38.422284 | orchestrator | Wednesday 07 January 2026 00:39:34 +0000 (0:00:00.652) 0:00:02.167 ***** 2026-01-07 00:39:38.422288 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:39:38.422292 | orchestrator | 2026-01-07 00:39:38.422296 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:38.422299 | orchestrator | 2026-01-07 00:39:38.422303 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:38.422307 | orchestrator | Wednesday 07 January 2026 00:39:34 +0000 (0:00:00.116) 0:00:02.283 ***** 2026-01-07 00:39:38.422311 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:39:38.422314 | orchestrator | 2026-01-07 00:39:38.422318 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:38.422322 | orchestrator | Wednesday 07 January 2026 00:39:34 +0000 (0:00:00.220) 0:00:02.504 ***** 2026-01-07 00:39:38.422326 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:39:38.422330 | orchestrator | 2026-01-07 00:39:38.422333 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:39:38.422338 | orchestrator | Wednesday 07 January 2026 00:39:35 +0000 (0:00:00.729) 0:00:03.233 ***** 2026-01-07 00:39:38.422341 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:39:38.422345 | orchestrator | 2026-01-07 00:39:38.422349 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:38.422353 | orchestrator | 2026-01-07 00:39:38.422356 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-01-07 00:39:38.422360 | orchestrator | Wednesday 07 January 2026 00:39:35 +0000 (0:00:00.126) 0:00:03.359 ***** 2026-01-07 00:39:38.422364 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:39:38.422368 | orchestrator | 2026-01-07 00:39:38.422371 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:38.422375 | orchestrator | Wednesday 07 January 2026 00:39:35 +0000 (0:00:00.099) 0:00:03.459 ***** 2026-01-07 00:39:38.422379 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:39:38.422383 | orchestrator | 2026-01-07 00:39:38.422386 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:39:38.422400 | orchestrator | Wednesday 07 January 2026 00:39:36 +0000 (0:00:00.722) 0:00:04.181 ***** 2026-01-07 00:39:38.422404 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:39:38.422408 | orchestrator | 2026-01-07 00:39:38.422412 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:38.422416 | orchestrator | 2026-01-07 00:39:38.422420 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:38.422423 | orchestrator | Wednesday 07 January 2026 00:39:36 +0000 (0:00:00.115) 0:00:04.297 ***** 2026-01-07 00:39:38.422427 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:39:38.422431 | orchestrator | 2026-01-07 00:39:38.422435 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:38.422513 | orchestrator | Wednesday 07 January 2026 00:39:36 +0000 (0:00:00.101) 0:00:04.398 ***** 2026-01-07 00:39:38.422520 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:39:38.422526 | orchestrator | 2026-01-07 00:39:38.422532 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-01-07 00:39:38.422538 | orchestrator | Wednesday 07 January 2026 00:39:37 +0000 (0:00:00.781) 0:00:05.180 ***** 2026-01-07 00:39:38.422545 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:39:38.422552 | orchestrator | 2026-01-07 00:39:38.422558 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:38.422565 | orchestrator | 2026-01-07 00:39:38.422571 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:38.422583 | orchestrator | Wednesday 07 January 2026 00:39:37 +0000 (0:00:00.129) 0:00:05.310 ***** 2026-01-07 00:39:38.422587 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:39:38.422592 | orchestrator | 2026-01-07 00:39:38.422596 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:38.422601 | orchestrator | Wednesday 07 January 2026 00:39:37 +0000 (0:00:00.114) 0:00:05.425 ***** 2026-01-07 00:39:38.422605 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:39:38.422609 | orchestrator | 2026-01-07 00:39:38.422614 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:39:38.422618 | orchestrator | Wednesday 07 January 2026 00:39:38 +0000 (0:00:00.674) 0:00:06.100 ***** 2026-01-07 00:39:38.422636 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:39:38.422641 | orchestrator | 2026-01-07 00:39:38.422646 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:39:38.422651 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:38.422657 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:38.422662 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-01-07 00:39:38.422666 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:38.422671 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:38.422675 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:38.422679 | orchestrator | 2026-01-07 00:39:38.422684 | orchestrator | 2026-01-07 00:39:38.422689 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:39:38.422693 | orchestrator | Wednesday 07 January 2026 00:39:38 +0000 (0:00:00.045) 0:00:06.145 ***** 2026-01-07 00:39:38.422698 | orchestrator | =============================================================================== 2026-01-07 00:39:38.422702 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.54s 2026-01-07 00:39:38.422707 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s 2026-01-07 00:39:38.422711 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s 2026-01-07 00:39:38.740120 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-07 00:39:50.897133 | orchestrator | 2026-01-07 00:39:50 | INFO  | Task 6b68d58c-340f-4827-b488-c9de2835a165 (wait-for-connection) was prepared for execution. 2026-01-07 00:39:50.897239 | orchestrator | 2026-01-07 00:39:50 | INFO  | It takes a moment until task 6b68d58c-340f-4827-b488-c9de2835a165 (wait-for-connection) has been started and output is visible here. 
2026-01-07 00:40:07.183219 | orchestrator | 2026-01-07 00:40:07.183335 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-01-07 00:40:07.183352 | orchestrator | 2026-01-07 00:40:07.183365 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-01-07 00:40:07.183376 | orchestrator | Wednesday 07 January 2026 00:39:55 +0000 (0:00:00.257) 0:00:00.257 ***** 2026-01-07 00:40:07.183388 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:40:07.183424 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:40:07.183436 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:40:07.183447 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:07.183458 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:07.183492 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:40:07.183504 | orchestrator | 2026-01-07 00:40:07.183515 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:40:07.183527 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:40:07.183552 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:40:07.183564 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:40:07.183576 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:40:07.183587 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:40:07.183598 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:40:07.183609 | orchestrator | 2026-01-07 00:40:07.183620 | orchestrator | 2026-01-07 00:40:07.183631 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-07 00:40:07.183642 | orchestrator | Wednesday 07 January 2026 00:40:06 +0000 (0:00:11.559) 0:00:11.816 ***** 2026-01-07 00:40:07.183653 | orchestrator | =============================================================================== 2026-01-07 00:40:07.183663 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.56s 2026-01-07 00:40:07.512221 | orchestrator | + osism apply hddtemp 2026-01-07 00:40:19.638356 | orchestrator | 2026-01-07 00:40:19 | INFO  | Task 2d09ec7b-a7c0-4994-bd13-248613fd9a7d (hddtemp) was prepared for execution. 2026-01-07 00:40:19.638525 | orchestrator | 2026-01-07 00:40:19 | INFO  | It takes a moment until task 2d09ec7b-a7c0-4994-bd13-248613fd9a7d (hddtemp) has been started and output is visible here. 2026-01-07 00:40:49.570581 | orchestrator | 2026-01-07 00:40:49.570754 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-01-07 00:40:49.570774 | orchestrator | 2026-01-07 00:40:49.570786 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-01-07 00:40:49.570798 | orchestrator | Wednesday 07 January 2026 00:40:24 +0000 (0:00:00.258) 0:00:00.258 ***** 2026-01-07 00:40:49.570809 | orchestrator | ok: [testbed-manager] 2026-01-07 00:40:49.570822 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:40:49.570833 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:40:49.570844 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:40:49.570855 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:49.570866 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:49.570877 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:40:49.570888 | orchestrator | 2026-01-07 00:40:49.570899 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-01-07 00:40:49.570910 | orchestrator | Wednesday 07 January 2026 
00:40:24 +0000 (0:00:00.684) 0:00:00.943 ***** 2026-01-07 00:40:49.570924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:40:49.570938 | orchestrator | 2026-01-07 00:40:49.570949 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-01-07 00:40:49.570960 | orchestrator | Wednesday 07 January 2026 00:40:25 +0000 (0:00:01.220) 0:00:02.163 ***** 2026-01-07 00:40:49.570971 | orchestrator | ok: [testbed-manager] 2026-01-07 00:40:49.570983 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:40:49.570995 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:40:49.571005 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:49.571016 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:40:49.571027 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:40:49.571070 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:49.571084 | orchestrator | 2026-01-07 00:40:49.571096 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-01-07 00:40:49.571110 | orchestrator | Wednesday 07 January 2026 00:40:28 +0000 (0:00:02.244) 0:00:04.407 ***** 2026-01-07 00:40:49.571122 | orchestrator | changed: [testbed-manager] 2026-01-07 00:40:49.571136 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:40:49.571148 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:40:49.571161 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:40:49.571173 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:40:49.571185 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:40:49.571198 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:40:49.571211 | orchestrator | 2026-01-07 00:40:49.571224 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-01-07 00:40:49.571236 | orchestrator | Wednesday 07 January 2026 00:40:29 +0000 (0:00:01.220) 0:00:05.627 ***** 2026-01-07 00:40:49.571249 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:40:49.571262 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:40:49.571274 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:40:49.571287 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:49.571300 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:49.571313 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:40:49.571325 | orchestrator | ok: [testbed-manager] 2026-01-07 00:40:49.571370 | orchestrator | 2026-01-07 00:40:49.571382 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-01-07 00:40:49.571395 | orchestrator | Wednesday 07 January 2026 00:40:30 +0000 (0:00:01.167) 0:00:06.795 ***** 2026-01-07 00:40:49.571408 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:40:49.571419 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:40:49.571430 | orchestrator | changed: [testbed-manager] 2026-01-07 00:40:49.571441 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:40:49.571452 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:49.571462 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:49.571473 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:40:49.571483 | orchestrator | 2026-01-07 00:40:49.571494 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-01-07 00:40:49.571505 | orchestrator | Wednesday 07 January 2026 00:40:31 +0000 (0:00:00.841) 0:00:07.636 ***** 2026-01-07 00:40:49.571516 | orchestrator | changed: [testbed-manager] 2026-01-07 00:40:49.571526 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:40:49.571556 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:40:49.571568 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:40:49.571578 | orchestrator | changed: 
[testbed-node-1] 2026-01-07 00:40:49.571589 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:40:49.571599 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:40:49.571610 | orchestrator | 2026-01-07 00:40:49.571621 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-01-07 00:40:49.571631 | orchestrator | Wednesday 07 January 2026 00:40:44 +0000 (0:00:13.445) 0:00:21.081 ***** 2026-01-07 00:40:49.571643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:40:49.571654 | orchestrator | 2026-01-07 00:40:49.571665 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-01-07 00:40:49.571676 | orchestrator | Wednesday 07 January 2026 00:40:46 +0000 (0:00:01.339) 0:00:22.421 ***** 2026-01-07 00:40:49.571687 | orchestrator | changed: [testbed-manager] 2026-01-07 00:40:49.571702 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:40:49.571720 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:40:49.571739 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:40:49.571770 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:40:49.571790 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:40:49.571808 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:40:49.571840 | orchestrator | 2026-01-07 00:40:49.571860 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:40:49.571881 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:40:49.571931 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:40:49.571945 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:40:49.571956 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:40:49.571967 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:40:49.571978 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:40:49.571988 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:40:49.571999 | orchestrator | 2026-01-07 00:40:49.572009 | orchestrator | 2026-01-07 00:40:49.572020 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:40:49.572031 | orchestrator | Wednesday 07 January 2026 00:40:49 +0000 (0:00:02.959) 0:00:25.380 ***** 2026-01-07 00:40:49.572042 | orchestrator | =============================================================================== 2026-01-07 00:40:49.572053 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.45s 2026-01-07 00:40:49.572064 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.96s 2026-01-07 00:40:49.572075 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.24s 2026-01-07 00:40:49.572086 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.34s 2026-01-07 00:40:49.572096 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2026-01-07 00:40:49.572107 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.22s 2026-01-07 00:40:49.572118 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.17s 2026-01-07 00:40:49.572129 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.84s 2026-01-07 00:40:49.572140 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2026-01-07 00:40:49.897695 | orchestrator | ++ semver latest 7.1.1 2026-01-07 00:40:49.956372 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-07 00:40:49.956470 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-07 00:40:49.956486 | orchestrator | + sudo systemctl restart manager.service 2026-01-07 00:41:31.959685 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-07 00:41:31.959820 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-07 00:41:31.959837 | orchestrator | + local max_attempts=60 2026-01-07 00:41:31.960044 | orchestrator | + local name=ceph-ansible 2026-01-07 00:41:31.960056 | orchestrator | + local attempt_num=1 2026-01-07 00:41:31.960067 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:41:31.990667 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:41:31.990756 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:41:31.990769 | orchestrator | + sleep 5 2026-01-07 00:41:36.997050 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:41:37.019959 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:41:37.020058 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:41:37.020073 | orchestrator | + sleep 5 2026-01-07 00:41:42.023182 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:41:42.058067 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:41:42.058168 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:41:42.058179 | orchestrator | + sleep 5 2026-01-07 00:41:47.061700 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:41:47.093623 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:41:47.093710 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:41:47.093719 | orchestrator | + sleep 5 2026-01-07 00:41:52.098112 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:41:52.124757 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:41:52.124854 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:41:52.124868 | orchestrator | + sleep 5 2026-01-07 00:41:57.130356 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:41:57.170626 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:41:57.170748 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:41:57.170763 | orchestrator | + sleep 5 2026-01-07 00:42:02.176645 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:42:02.215478 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:42:02.215593 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:42:02.215609 | orchestrator | + sleep 5 2026-01-07 00:42:07.222985 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:42:07.258482 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-07 00:42:07.258612 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:42:07.258629 | orchestrator | + sleep 5 2026-01-07 00:42:12.260809 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:42:12.293072 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-07 00:42:12.293178 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:42:12.293186 | orchestrator | + sleep 5 2026-01-07 00:42:17.296626 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:42:17.329577 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2026-01-07 00:42:17.329689 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:42:17.329700 | orchestrator | + sleep 5 2026-01-07 00:42:22.334880 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:42:22.371612 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-07 00:42:22.371712 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:42:22.371729 | orchestrator | + sleep 5 2026-01-07 00:42:27.375649 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:42:27.417523 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-07 00:42:27.418135 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:42:27.418167 | orchestrator | + sleep 5 2026-01-07 00:42:32.421703 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:42:32.457572 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-07 00:42:32.457726 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-07 00:42:32.457738 | orchestrator | + sleep 5 2026-01-07 00:42:37.461404 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:42:37.497778 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:42:37.497880 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-07 00:42:37.497898 | orchestrator | + local max_attempts=60 2026-01-07 00:42:37.497924 | orchestrator | + local name=kolla-ansible 2026-01-07 00:42:37.497936 | orchestrator | + local attempt_num=1 2026-01-07 00:42:37.498434 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-07 00:42:37.535030 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:42:37.535119 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-07 00:42:37.535133 | orchestrator | + local max_attempts=60 2026-01-07 
00:42:37.535145 | orchestrator | + local name=osism-ansible 2026-01-07 00:42:37.535157 | orchestrator | + local attempt_num=1 2026-01-07 00:42:37.535712 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-07 00:42:37.578384 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:42:37.578466 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-07 00:42:37.578480 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-07 00:42:37.744585 | orchestrator | ARA in ceph-ansible already disabled. 2026-01-07 00:42:37.908477 | orchestrator | ARA in kolla-ansible already disabled. 2026-01-07 00:42:38.230932 | orchestrator | ARA in osism-kubernetes already disabled. 2026-01-07 00:42:38.231437 | orchestrator | + osism apply gather-facts 2026-01-07 00:42:50.378679 | orchestrator | 2026-01-07 00:42:50 | INFO  | Task 6863ce16-6916-48b9-8a9a-6a5b01455e85 (gather-facts) was prepared for execution. 2026-01-07 00:42:50.378824 | orchestrator | 2026-01-07 00:42:50 | INFO  | It takes a moment until task 6863ce16-6916-48b9-8a9a-6a5b01455e85 (gather-facts) has been started and output is visible here. 
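
The `wait_for_container_healthy` helper traced above polls `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds, up to `max_attempts` times per container. A minimal sketch reconstructed from the trace follows; the Docker probe is replaced by a stub counter (an assumption, so the sketch runs without a Docker daemon), and the 5-second sleep is shortened for the demo.

```shell
probe_count=0
probe_health() {
  # Stand-in for: docker inspect -f '{{.State.Health.Status}}' "$1"
  # (stubbed: reports "starting" twice, then "healthy")
  probe_count=$((probe_count + 1))
  if [ "$probe_count" -lt 3 ]; then health=starting; else health=healthy; fi
}

wait_for_container_healthy() {
  local max_attempts=$1
  local name=$2
  local attempt_num=1
  health=unknown
  # Loop condition: probe, then keep looping while the reported status
  # is anything other than "healthy" (the trace shows "unhealthy" and
  # "starting" before "healthy").
  while probe_health "$name"; [[ $health != healthy ]]; do
    if (( attempt_num++ == max_attempts )); then
      echo "container $name never became healthy" >&2
      return 1
    fi
    sleep 0   # the real loop sleeps 5 seconds between polls
  done
}

wait_for_container_healthy 60 ceph-ansible && echo "healthy after $probe_count polls"
```

With the stub above, the status flips to `healthy` on the third poll, matching the shape of the trace (several non-healthy polls, then success) rather than its exact timing.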
2026-01-07 00:43:04.419565 | orchestrator | 2026-01-07 00:43:04.419675 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-07 00:43:04.419690 | orchestrator | 2026-01-07 00:43:04.419699 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-07 00:43:04.419708 | orchestrator | Wednesday 07 January 2026 00:42:54 +0000 (0:00:00.218) 0:00:00.218 ***** 2026-01-07 00:43:04.419716 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:43:04.419726 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:43:04.419734 | orchestrator | ok: [testbed-manager] 2026-01-07 00:43:04.419742 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:43:04.419750 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:43:04.419758 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:43:04.419766 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:04.419774 | orchestrator | 2026-01-07 00:43:04.419782 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-07 00:43:04.419790 | orchestrator | 2026-01-07 00:43:04.419798 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-07 00:43:04.419806 | orchestrator | Wednesday 07 January 2026 00:43:03 +0000 (0:00:08.681) 0:00:08.899 ***** 2026-01-07 00:43:04.419814 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:43:04.419823 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:43:04.419831 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:43:04.419839 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:43:04.419847 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:43:04.419855 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:04.419863 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:04.419871 | orchestrator | 2026-01-07 00:43:04.419879 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-07 00:43:04.419887 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:43:04.419896 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:43:04.419904 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:43:04.419913 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:43:04.419921 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:43:04.419929 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:43:04.419937 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:43:04.419945 | orchestrator | 2026-01-07 00:43:04.419953 | orchestrator | 2026-01-07 00:43:04.419961 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:43:04.419970 | orchestrator | Wednesday 07 January 2026 00:43:03 +0000 (0:00:00.561) 0:00:09.460 ***** 2026-01-07 00:43:04.419978 | orchestrator | =============================================================================== 2026-01-07 00:43:04.419986 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.68s 2026-01-07 00:43:04.419994 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-01-07 00:43:04.741478 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-07 00:43:04.755542 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-07 00:43:04.767878 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-07 00:43:04.785569 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-07 00:43:04.800012 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-07 00:43:04.822487 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-07 00:43:04.836704 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-07 00:43:04.848594 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-07 00:43:04.858569 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-07 00:43:04.868589 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-07 00:43:04.879883 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-07 00:43:04.890743 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-07 00:43:04.906786 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-07 00:43:04.929252 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-07 00:43:04.940668 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-07 00:43:04.957120 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-07 00:43:04.974898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-07 00:43:04.994645 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-07 00:43:05.015532 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-07 00:43:05.032922 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-07 00:43:05.051990 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-07 00:43:05.162479 | orchestrator | ok: Runtime: 0:25:28.419131 2026-01-07 00:43:05.261265 | 2026-01-07 00:43:05.261461 | TASK [Deploy services] 2026-01-07 00:43:05.794789 | orchestrator | skipping: Conditional result was False 2026-01-07 00:43:05.813686 | 2026-01-07 00:43:05.813874 | TASK [Deploy in a nutshell] 2026-01-07 00:43:06.555254 | orchestrator | + set -e 2026-01-07 00:43:06.555426 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-07 00:43:06.555437 | orchestrator | ++ export INTERACTIVE=false 2026-01-07 00:43:06.555447 | orchestrator | ++ INTERACTIVE=false 2026-01-07 00:43:06.555453 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-07 00:43:06.555457 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-07 00:43:06.555463 | orchestrator | + source /opt/manager-vars.sh 2026-01-07 00:43:06.555501 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-07 00:43:06.555514 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-07 00:43:06.555519 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-07 00:43:06.555534 | orchestrator | ++ CEPH_VERSION=reef 2026-01-07 00:43:06.555539 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-07 00:43:06.555546 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-01-07 00:43:06.555550 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-07 00:43:06.555559 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-07 00:43:06.555562 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-07 00:43:06.555569 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-07 00:43:06.555599 | orchestrator | 2026-01-07 00:43:06.555605 | orchestrator | # PULL IMAGES 2026-01-07 00:43:06.555608 | orchestrator | 2026-01-07 00:43:06.555612 | orchestrator | ++ export ARA=false 2026-01-07 00:43:06.555616 | orchestrator | ++ ARA=false 2026-01-07 00:43:06.555621 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-07 00:43:06.555625 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-07 00:43:06.555629 | orchestrator | ++ export TEMPEST=true 2026-01-07 00:43:06.555633 | orchestrator | ++ TEMPEST=true 2026-01-07 00:43:06.555636 | orchestrator | ++ export IS_ZUUL=true 2026-01-07 00:43:06.555640 | orchestrator | ++ IS_ZUUL=true 2026-01-07 00:43:06.555644 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2026-01-07 00:43:06.555648 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2026-01-07 00:43:06.555652 | orchestrator | ++ export EXTERNAL_API=false 2026-01-07 00:43:06.555656 | orchestrator | ++ EXTERNAL_API=false 2026-01-07 00:43:06.555660 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-07 00:43:06.555664 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-07 00:43:06.555667 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-07 00:43:06.555671 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-07 00:43:06.555675 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-07 00:43:06.555684 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-07 00:43:06.555688 | orchestrator | + echo 2026-01-07 00:43:06.555692 | orchestrator | + echo '# PULL IMAGES' 2026-01-07 00:43:06.555696 | orchestrator | + echo 2026-01-07 00:43:06.556507 | orchestrator | ++ semver latest 7.0.0 2026-01-07 
00:43:06.603652 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-07 00:43:06.603706 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-07 00:43:06.603712 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-07 00:43:08.529195 | orchestrator | 2026-01-07 00:43:08 | INFO  | Trying to run play pull-images in environment custom 2026-01-07 00:43:18.621750 | orchestrator | 2026-01-07 00:43:18 | INFO  | Task 0b28804c-6ef2-47b1-a4bc-29fe259cd001 (pull-images) was prepared for execution. 2026-01-07 00:43:18.621919 | orchestrator | 2026-01-07 00:43:18 | INFO  | Task 0b28804c-6ef2-47b1-a4bc-29fe259cd001 is running in background. No more output. Check ARA for logs. 2026-01-07 00:43:20.890452 | orchestrator | 2026-01-07 00:43:20 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-07 00:43:31.007434 | orchestrator | 2026-01-07 00:43:31 | INFO  | Task c7ad8b2b-a8c9-4814-81a9-988d74a15815 (wipe-partitions) was prepared for execution. 2026-01-07 00:43:31.007579 | orchestrator | 2026-01-07 00:43:31 | INFO  | It takes a moment until task c7ad8b2b-a8c9-4814-81a9-988d74a15815 (wipe-partitions) has been started and output is visible here. 
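
The version gate that appears twice in the trace (`semver latest 7.1.1` printing `-1`, then `[[ -1 -ge 0 ]]` failing, then an explicit check for the literal tag `latest`) amounts to: run the step when `MANAGER_VERSION` is at least the given release, or when it is the floating `latest` tag. A minimal sketch, with the real `semver` CLI replaced by a stub (an assumption; only its `-1` output for `latest` versus a numeric version is visible in the log):

```shell
# Stub for the `semver` comparator seen in the trace; the real tool prints a
# comparison result (-1/0/1). Only the "latest vs. numeric" case is grounded
# in the log above; the numeric branch here is an illustrative assumption.
semver() {
  if [ "$1" = latest ]; then echo -1; else echo 0; fi
}

MANAGER_VERSION=latest
gate=closed
# Mirrors the traced pair of tests:
#   + [[ -1 -ge 0 ]]                      (numeric comparison fails for "latest")
#   + [[ latest == \l\a\t\e\s\t ]]        (literal-tag fallback succeeds)
if [ "$(semver "$MANAGER_VERSION" 7.1.1)" -ge 0 ] || [ "$MANAGER_VERSION" = latest ]; then
  gate=open
fi
echo "gate=$gate"
```

The fallback matters because `latest` is not a parseable version: without the second test, a manager pinned to the floating tag would never pass a minimum-version gate.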
2026-01-07 00:43:44.457999 | orchestrator | 2026-01-07 00:43:44.458311 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-01-07 00:43:44.458345 | orchestrator | 2026-01-07 00:43:44.458364 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-01-07 00:43:44.458391 | orchestrator | Wednesday 07 January 2026 00:43:36 +0000 (0:00:00.175) 0:00:00.175 ***** 2026-01-07 00:43:44.458414 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:43:44.458436 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:43:44.458459 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:43:44.458483 | orchestrator | 2026-01-07 00:43:44.458510 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-01-07 00:43:44.458572 | orchestrator | Wednesday 07 January 2026 00:43:36 +0000 (0:00:00.584) 0:00:00.760 ***** 2026-01-07 00:43:44.458592 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:43:44.458621 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:44.458658 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:44.458682 | orchestrator | 2026-01-07 00:43:44.458699 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-01-07 00:43:44.458720 | orchestrator | Wednesday 07 January 2026 00:43:36 +0000 (0:00:00.374) 0:00:01.134 ***** 2026-01-07 00:43:44.458741 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:43:44.458761 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:44.458781 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:43:44.458801 | orchestrator | 2026-01-07 00:43:44.458821 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-01-07 00:43:44.458841 | orchestrator | Wednesday 07 January 2026 00:43:37 +0000 (0:00:00.628) 0:00:01.763 ***** 2026-01-07 00:43:44.458860 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 00:43:44.458913 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:44.458948 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:44.458966 | orchestrator | 2026-01-07 00:43:44.458983 | orchestrator | TASK [Check device availability] *********************************************** 2026-01-07 00:43:44.459001 | orchestrator | Wednesday 07 January 2026 00:43:37 +0000 (0:00:00.252) 0:00:02.016 ***** 2026-01-07 00:43:44.459021 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-07 00:43:44.459047 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-07 00:43:44.459066 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-07 00:43:44.459086 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-07 00:43:44.459105 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-07 00:43:44.459123 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-07 00:43:44.459139 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-07 00:43:44.459182 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-07 00:43:44.459195 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-07 00:43:44.459206 | orchestrator | 2026-01-07 00:43:44.459217 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-01-07 00:43:44.459229 | orchestrator | Wednesday 07 January 2026 00:43:39 +0000 (0:00:01.245) 0:00:03.261 ***** 2026-01-07 00:43:44.459241 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-01-07 00:43:44.459251 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-01-07 00:43:44.459262 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-01-07 00:43:44.459273 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-01-07 00:43:44.459284 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-01-07 00:43:44.459295 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-01-07 00:43:44.459306 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-01-07 00:43:44.459317 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-01-07 00:43:44.459328 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-01-07 00:43:44.459338 | orchestrator | 2026-01-07 00:43:44.459349 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-01-07 00:43:44.459360 | orchestrator | Wednesday 07 January 2026 00:43:40 +0000 (0:00:01.563) 0:00:04.825 ***** 2026-01-07 00:43:44.459371 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-07 00:43:44.459382 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-07 00:43:44.459393 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-07 00:43:44.459404 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-07 00:43:44.459415 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-07 00:43:44.459436 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-07 00:43:44.459447 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-07 00:43:44.459472 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-07 00:43:44.459483 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-07 00:43:44.459494 | orchestrator | 2026-01-07 00:43:44.459505 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-01-07 00:43:44.459516 | orchestrator | Wednesday 07 January 2026 00:43:42 +0000 (0:00:02.169) 0:00:06.994 ***** 2026-01-07 00:43:44.459526 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:43:44.459537 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:43:44.459548 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:43:44.459558 | orchestrator | 2026-01-07 00:43:44.459569 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-01-07 00:43:44.459580 | orchestrator | Wednesday 07 January 2026 00:43:43 +0000 (0:00:00.600) 0:00:07.595 ***** 2026-01-07 00:43:44.459591 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:43:44.459601 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:43:44.459612 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:43:44.459622 | orchestrator | 2026-01-07 00:43:44.459633 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:43:44.459647 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:43:44.459660 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:43:44.459708 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:43:44.459733 | orchestrator | 2026-01-07 00:43:44.459751 | orchestrator | 2026-01-07 00:43:44.459770 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:43:44.459788 | orchestrator | Wednesday 07 January 2026 00:43:44 +0000 (0:00:00.625) 0:00:08.220 ***** 2026-01-07 00:43:44.459806 | orchestrator | =============================================================================== 2026-01-07 00:43:44.459825 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.17s 2026-01-07 00:43:44.459843 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.56s 2026-01-07 00:43:44.459861 | orchestrator | Check device availability ----------------------------------------------- 1.25s 2026-01-07 00:43:44.459873 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.63s 2026-01-07 00:43:44.459884 | orchestrator | Request device events from the kernel 
----------------------------------- 0.63s 2026-01-07 00:43:44.459894 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2026-01-07 00:43:44.459905 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2026-01-07 00:43:44.459916 | orchestrator | Remove all rook related logical devices --------------------------------- 0.37s 2026-01-07 00:43:44.459927 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2026-01-07 00:43:56.852302 | orchestrator | 2026-01-07 00:43:56 | INFO  | Task eea9b35a-f1ec-4bd5-83f7-563a533e703f (facts) was prepared for execution. 2026-01-07 00:43:56.852448 | orchestrator | 2026-01-07 00:43:56 | INFO  | It takes a moment until task eea9b35a-f1ec-4bd5-83f7-563a533e703f (facts) has been started and output is visible here. 2026-01-07 00:44:09.819467 | orchestrator | 2026-01-07 00:44:09.819651 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-07 00:44:09.819668 | orchestrator | 2026-01-07 00:44:09.819679 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-07 00:44:09.819690 | orchestrator | Wednesday 07 January 2026 00:44:00 +0000 (0:00:00.278) 0:00:00.278 ***** 2026-01-07 00:44:09.819700 | orchestrator | ok: [testbed-manager] 2026-01-07 00:44:09.819712 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:44:09.819722 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:44:09.819766 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:44:09.819776 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:44:09.819786 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:44:09.819796 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:44:09.819805 | orchestrator | 2026-01-07 00:44:09.819818 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-07 00:44:09.819828 | 
orchestrator | Wednesday 07 January 2026 00:44:01 +0000 (0:00:01.145) 0:00:01.424 ***** 2026-01-07 00:44:09.819837 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:44:09.819849 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:44:09.819858 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:44:09.819868 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:44:09.819877 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:09.819887 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:09.819896 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:09.819911 | orchestrator | 2026-01-07 00:44:09.819929 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-07 00:44:09.819947 | orchestrator | 2026-01-07 00:44:09.819962 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-07 00:44:09.819988 | orchestrator | Wednesday 07 January 2026 00:44:03 +0000 (0:00:01.246) 0:00:02.671 ***** 2026-01-07 00:44:09.820009 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:44:09.820026 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:44:09.820042 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:44:09.820058 | orchestrator | ok: [testbed-manager] 2026-01-07 00:44:09.820074 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:44:09.820090 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:44:09.820106 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:44:09.820121 | orchestrator | 2026-01-07 00:44:09.820160 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-07 00:44:09.820178 | orchestrator | 2026-01-07 00:44:09.820192 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-07 00:44:09.820231 | orchestrator | Wednesday 07 January 2026 00:44:08 +0000 (0:00:05.659) 0:00:08.330 ***** 2026-01-07 00:44:09.820246 | orchestrator | 
skipping: [testbed-manager] 2026-01-07 00:44:09.820261 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:44:09.820275 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:44:09.820290 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:44:09.820308 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:09.820324 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:09.820341 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:09.820356 | orchestrator | 2026-01-07 00:44:09.820371 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:44:09.820389 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:44:09.820407 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:44:09.820424 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:44:09.820439 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:44:09.820449 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:44:09.820459 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:44:09.820468 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:44:09.820478 | orchestrator | 2026-01-07 00:44:09.820501 | orchestrator | 2026-01-07 00:44:09.820511 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:44:09.820520 | orchestrator | Wednesday 07 January 2026 00:44:09 +0000 (0:00:00.518) 0:00:08.849 ***** 2026-01-07 00:44:09.820530 | orchestrator | =============================================================================== 
2026-01-07 00:44:09.820539 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.66s 2026-01-07 00:44:09.820549 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2026-01-07 00:44:09.820558 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s 2026-01-07 00:44:09.820568 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-01-07 00:44:12.311205 | orchestrator | 2026-01-07 00:44:12 | INFO  | Task 8db04393-0cb3-4747-8655-a76cfbc4d726 (ceph-configure-lvm-volumes) was prepared for execution. 2026-01-07 00:44:12.311353 | orchestrator | 2026-01-07 00:44:12 | INFO  | It takes a moment until task 8db04393-0cb3-4747-8655-a76cfbc4d726 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-01-07 00:44:23.606005 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 00:44:23.606283 | orchestrator | 2.16.14 2026-01-07 00:44:23.606299 | orchestrator | 2026-01-07 00:44:23.606312 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-07 00:44:23.606325 | orchestrator | 2026-01-07 00:44:23.606339 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:44:23.606351 | orchestrator | Wednesday 07 January 2026 00:44:16 +0000 (0:00:00.291) 0:00:00.291 ***** 2026-01-07 00:44:23.606363 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-07 00:44:23.606375 | orchestrator | 2026-01-07 00:44:23.606386 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 00:44:23.606397 | orchestrator | Wednesday 07 January 2026 00:44:16 +0000 (0:00:00.227) 0:00:00.519 ***** 2026-01-07 00:44:23.606407 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:44:23.606419 | orchestrator | 
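The wipe-partitions play earlier in the log (wipefs, zeroing the first 32M, then a udev reload and trigger) boils down to roughly this per-device sequence. This is a sketch assuming util-linux and coreutils are available; it is destructive on a real block device, so try it against a scratch image file first:

```shell
#!/usr/bin/env sh
# wipe_device DEV: per-device sequence mirroring the wipe-partitions play.
wipe_device() {
    dev="$1"
    # TASK [Wipe partitions with wipefs]: clear filesystem/partition signatures.
    if command -v wipefs >/dev/null 2>&1; then
        wipefs --all "$dev" >/dev/null
    fi
    # TASK [Overwrite first 32M with zeros]: destroy leftover metadata (LVM
    # labels, Ceph bluestore headers) beyond what wipefs recognizes.
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc 2>/dev/null
    # TASK [Reload udev rules] / [Request device events from the kernel] would
    # follow on a real host:
    #   udevadm control --reload && udevadm trigger
}
```

Zeroing only the first 32 MiB is enough here because LVM, GPT, and bluestore all keep their primary labels near the start of the device.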
2026-01-07 00:44:23.606430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.606441 | orchestrator | Wednesday 07 January 2026 00:44:17 +0000 (0:00:00.216) 0:00:00.736 ***** 2026-01-07 00:44:23.606452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-07 00:44:23.606463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-07 00:44:23.606474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-07 00:44:23.606485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-07 00:44:23.606499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-07 00:44:23.606512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-07 00:44:23.606524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-07 00:44:23.606537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-07 00:44:23.606550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-07 00:44:23.606563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-07 00:44:23.606585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-07 00:44:23.606598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-07 00:44:23.606610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-07 00:44:23.606623 | orchestrator | 2026-01-07 00:44:23.606635 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-01-07 00:44:23.606671 | orchestrator | Wednesday 07 January 2026 00:44:17 +0000 (0:00:00.459) 0:00:01.195 ***** 2026-01-07 00:44:23.606683 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.606696 | orchestrator | 2026-01-07 00:44:23.606708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.606721 | orchestrator | Wednesday 07 January 2026 00:44:17 +0000 (0:00:00.177) 0:00:01.373 ***** 2026-01-07 00:44:23.606733 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.606745 | orchestrator | 2026-01-07 00:44:23.606757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.606770 | orchestrator | Wednesday 07 January 2026 00:44:17 +0000 (0:00:00.208) 0:00:01.581 ***** 2026-01-07 00:44:23.606782 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.606795 | orchestrator | 2026-01-07 00:44:23.606807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.606825 | orchestrator | Wednesday 07 January 2026 00:44:18 +0000 (0:00:00.191) 0:00:01.772 ***** 2026-01-07 00:44:23.606837 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.606850 | orchestrator | 2026-01-07 00:44:23.606862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.606873 | orchestrator | Wednesday 07 January 2026 00:44:18 +0000 (0:00:00.179) 0:00:01.952 ***** 2026-01-07 00:44:23.606884 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.606895 | orchestrator | 2026-01-07 00:44:23.606905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.606916 | orchestrator | Wednesday 07 January 2026 00:44:18 +0000 (0:00:00.171) 0:00:02.124 ***** 2026-01-07 00:44:23.606927 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 00:44:23.606937 | orchestrator | 2026-01-07 00:44:23.606948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.606958 | orchestrator | Wednesday 07 January 2026 00:44:18 +0000 (0:00:00.221) 0:00:02.345 ***** 2026-01-07 00:44:23.606969 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.606980 | orchestrator | 2026-01-07 00:44:23.606990 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.607001 | orchestrator | Wednesday 07 January 2026 00:44:18 +0000 (0:00:00.181) 0:00:02.526 ***** 2026-01-07 00:44:23.607012 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.607022 | orchestrator | 2026-01-07 00:44:23.607033 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.607043 | orchestrator | Wednesday 07 January 2026 00:44:19 +0000 (0:00:00.194) 0:00:02.721 ***** 2026-01-07 00:44:23.607054 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990) 2026-01-07 00:44:23.607066 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990) 2026-01-07 00:44:23.607077 | orchestrator | 2026-01-07 00:44:23.607088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.607118 | orchestrator | Wednesday 07 January 2026 00:44:19 +0000 (0:00:00.367) 0:00:03.089 ***** 2026-01-07 00:44:23.607149 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463) 2026-01-07 00:44:23.607161 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463) 2026-01-07 00:44:23.607171 | orchestrator | 2026-01-07 00:44:23.607182 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-01-07 00:44:23.607193 | orchestrator | Wednesday 07 January 2026 00:44:19 +0000 (0:00:00.554) 0:00:03.643 ***** 2026-01-07 00:44:23.607203 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e) 2026-01-07 00:44:23.607214 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e) 2026-01-07 00:44:23.607225 | orchestrator | 2026-01-07 00:44:23.607236 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.607253 | orchestrator | Wednesday 07 January 2026 00:44:20 +0000 (0:00:00.529) 0:00:04.172 ***** 2026-01-07 00:44:23.607264 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac) 2026-01-07 00:44:23.607275 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac) 2026-01-07 00:44:23.607286 | orchestrator | 2026-01-07 00:44:23.607297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:23.607308 | orchestrator | Wednesday 07 January 2026 00:44:21 +0000 (0:00:00.858) 0:00:05.031 ***** 2026-01-07 00:44:23.607318 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:44:23.607329 | orchestrator | 2026-01-07 00:44:23.607345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:23.607356 | orchestrator | Wednesday 07 January 2026 00:44:21 +0000 (0:00:00.330) 0:00:05.362 ***** 2026-01-07 00:44:23.607367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-07 00:44:23.607378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-07 00:44:23.607388 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-07 00:44:23.607399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-07 00:44:23.607409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-07 00:44:23.607420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-07 00:44:23.607430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-07 00:44:23.607441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-07 00:44:23.607452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-07 00:44:23.607462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-07 00:44:23.607473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-07 00:44:23.607483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-07 00:44:23.607494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-07 00:44:23.607505 | orchestrator | 2026-01-07 00:44:23.607515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:23.607526 | orchestrator | Wednesday 07 January 2026 00:44:22 +0000 (0:00:00.423) 0:00:05.785 ***** 2026-01-07 00:44:23.607536 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.607547 | orchestrator | 2026-01-07 00:44:23.607558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:23.607569 | orchestrator | Wednesday 07 January 2026 00:44:22 +0000 
(0:00:00.217) 0:00:06.003 ***** 2026-01-07 00:44:23.607579 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.607590 | orchestrator | 2026-01-07 00:44:23.607600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:23.607611 | orchestrator | Wednesday 07 January 2026 00:44:22 +0000 (0:00:00.213) 0:00:06.217 ***** 2026-01-07 00:44:23.607622 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.607633 | orchestrator | 2026-01-07 00:44:23.607643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:23.607654 | orchestrator | Wednesday 07 January 2026 00:44:22 +0000 (0:00:00.180) 0:00:06.398 ***** 2026-01-07 00:44:23.607665 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.607675 | orchestrator | 2026-01-07 00:44:23.607686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:23.607697 | orchestrator | Wednesday 07 January 2026 00:44:22 +0000 (0:00:00.196) 0:00:06.595 ***** 2026-01-07 00:44:23.607713 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.607724 | orchestrator | 2026-01-07 00:44:23.607735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:23.607746 | orchestrator | Wednesday 07 January 2026 00:44:23 +0000 (0:00:00.206) 0:00:06.801 ***** 2026-01-07 00:44:23.607756 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.607767 | orchestrator | 2026-01-07 00:44:23.607778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:23.607788 | orchestrator | Wednesday 07 January 2026 00:44:23 +0000 (0:00:00.237) 0:00:07.039 ***** 2026-01-07 00:44:23.607799 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:23.607810 | orchestrator | 2026-01-07 00:44:23.607826 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-07 00:44:31.122361 | orchestrator | Wednesday 07 January 2026 00:44:23 +0000 (0:00:00.212) 0:00:07.251 ***** 2026-01-07 00:44:31.122483 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.122501 | orchestrator | 2026-01-07 00:44:31.122514 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:31.122526 | orchestrator | Wednesday 07 January 2026 00:44:23 +0000 (0:00:00.190) 0:00:07.442 ***** 2026-01-07 00:44:31.122539 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-07 00:44:31.122551 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-07 00:44:31.122562 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-07 00:44:31.122573 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-07 00:44:31.122584 | orchestrator | 2026-01-07 00:44:31.122595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:31.122606 | orchestrator | Wednesday 07 January 2026 00:44:24 +0000 (0:00:01.056) 0:00:08.498 ***** 2026-01-07 00:44:31.122626 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.122645 | orchestrator | 2026-01-07 00:44:31.122664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:31.122684 | orchestrator | Wednesday 07 January 2026 00:44:25 +0000 (0:00:00.192) 0:00:08.691 ***** 2026-01-07 00:44:31.122703 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.122723 | orchestrator | 2026-01-07 00:44:31.122742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:31.122762 | orchestrator | Wednesday 07 January 2026 00:44:25 +0000 (0:00:00.207) 0:00:08.898 ***** 2026-01-07 00:44:31.122780 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.122797 | orchestrator | 2026-01-07 
00:44:31.122816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:31.122836 | orchestrator | Wednesday 07 January 2026 00:44:25 +0000 (0:00:00.203) 0:00:09.102 ***** 2026-01-07 00:44:31.122854 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.122873 | orchestrator | 2026-01-07 00:44:31.122893 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-07 00:44:31.122914 | orchestrator | Wednesday 07 January 2026 00:44:25 +0000 (0:00:00.189) 0:00:09.291 ***** 2026-01-07 00:44:31.122935 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-01-07 00:44:31.122954 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-01-07 00:44:31.122973 | orchestrator | 2026-01-07 00:44:31.123019 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-07 00:44:31.123040 | orchestrator | Wednesday 07 January 2026 00:44:25 +0000 (0:00:00.169) 0:00:09.461 ***** 2026-01-07 00:44:31.123060 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.123081 | orchestrator | 2026-01-07 00:44:31.123102 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-07 00:44:31.123149 | orchestrator | Wednesday 07 January 2026 00:44:25 +0000 (0:00:00.148) 0:00:09.609 ***** 2026-01-07 00:44:31.123170 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.123190 | orchestrator | 2026-01-07 00:44:31.123208 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-07 00:44:31.123251 | orchestrator | Wednesday 07 January 2026 00:44:26 +0000 (0:00:00.132) 0:00:09.741 ***** 2026-01-07 00:44:31.123263 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.123274 | orchestrator | 2026-01-07 00:44:31.123285 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-01-07 00:44:31.123296 | orchestrator | Wednesday 07 January 2026 00:44:26 +0000 (0:00:00.148) 0:00:09.890 ***** 2026-01-07 00:44:31.123314 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:44:31.123332 | orchestrator | 2026-01-07 00:44:31.123349 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-07 00:44:31.123369 | orchestrator | Wednesday 07 January 2026 00:44:26 +0000 (0:00:00.132) 0:00:10.023 ***** 2026-01-07 00:44:31.123386 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'}}) 2026-01-07 00:44:31.123399 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35426297-011a-51b2-a2d6-4f3d2a544c0e'}}) 2026-01-07 00:44:31.123410 | orchestrator | 2026-01-07 00:44:31.123421 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-07 00:44:31.123432 | orchestrator | Wednesday 07 January 2026 00:44:26 +0000 (0:00:00.175) 0:00:10.198 ***** 2026-01-07 00:44:31.123444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'}})  2026-01-07 00:44:31.123464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35426297-011a-51b2-a2d6-4f3d2a544c0e'}})  2026-01-07 00:44:31.123475 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.123486 | orchestrator | 2026-01-07 00:44:31.123497 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-07 00:44:31.123508 | orchestrator | Wednesday 07 January 2026 00:44:26 +0000 (0:00:00.144) 0:00:10.343 ***** 2026-01-07 00:44:31.123518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'}})  2026-01-07 00:44:31.123529 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35426297-011a-51b2-a2d6-4f3d2a544c0e'}})  2026-01-07 00:44:31.123540 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.123551 | orchestrator | 2026-01-07 00:44:31.123562 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-07 00:44:31.123573 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:00.344) 0:00:10.688 ***** 2026-01-07 00:44:31.123583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'}})  2026-01-07 00:44:31.123619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35426297-011a-51b2-a2d6-4f3d2a544c0e'}})  2026-01-07 00:44:31.123631 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.123642 | orchestrator | 2026-01-07 00:44:31.123652 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-07 00:44:31.123670 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:00.150) 0:00:10.838 ***** 2026-01-07 00:44:31.123685 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:44:31.123703 | orchestrator | 2026-01-07 00:44:31.123722 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-07 00:44:31.123739 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:00.142) 0:00:10.981 ***** 2026-01-07 00:44:31.123758 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:44:31.123770 | orchestrator | 2026-01-07 00:44:31.123781 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-07 00:44:31.123791 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:00.135) 0:00:11.116 ***** 2026-01-07 00:44:31.123802 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.123813 | orchestrator | 
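The repeated "Add known links to the list of available block devices" tasks above map stable `/dev/disk/by-id` aliases (e.g. `scsi-0QEMU_QEMU_HARDDISK_…`) back to base devices like `sdb`. A minimal sketch of that resolution, with the directory parameterized so it can run against a fixture instead of a live `/dev`:

```shell
#!/usr/bin/env sh
# list_links TARGET DIR: print the names of symlinks in DIR that resolve to
# TARGET -- i.e. the by-id aliases recorded for one base device.
list_links() {
    target=$(readlink -f "$1")
    dir="$2"
    for link in "$dir"/*; do
        [ -L "$link" ] || continue              # only consider symlinks
        if [ "$(readlink -f "$link")" = "$target" ]; then
            basename "$link"                    # alias name, e.g. scsi-0QEMU_...
        fi
    done
}
```

On a real node this would be invoked as `list_links /dev/sdb /dev/disk/by-id`.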
2026-01-07 00:44:31.123823 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-07 00:44:31.123834 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:00.146) 0:00:11.262 ***** 2026-01-07 00:44:31.123856 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.123867 | orchestrator | 2026-01-07 00:44:31.123878 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-07 00:44:31.123889 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:00.150) 0:00:11.413 ***** 2026-01-07 00:44:31.123899 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.123910 | orchestrator | 2026-01-07 00:44:31.123921 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-07 00:44:31.123931 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:00.131) 0:00:11.545 ***** 2026-01-07 00:44:31.123942 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 00:44:31.123953 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:44:31.123965 | orchestrator |  "sdb": { 2026-01-07 00:44:31.123976 | orchestrator |  "osd_lvm_uuid": "ef56a04c-76f1-5b5f-91f5-fd927a7d00fc" 2026-01-07 00:44:31.123988 | orchestrator |  }, 2026-01-07 00:44:31.123999 | orchestrator |  "sdc": { 2026-01-07 00:44:31.124010 | orchestrator |  "osd_lvm_uuid": "35426297-011a-51b2-a2d6-4f3d2a544c0e" 2026-01-07 00:44:31.124021 | orchestrator |  } 2026-01-07 00:44:31.124032 | orchestrator |  } 2026-01-07 00:44:31.124043 | orchestrator | } 2026-01-07 00:44:31.124054 | orchestrator | 2026-01-07 00:44:31.124065 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-07 00:44:31.124076 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.136) 0:00:11.682 ***** 2026-01-07 00:44:31.124087 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.124097 | orchestrator | 
2026-01-07 00:44:31.124110 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-07 00:44:31.124193 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.135) 0:00:11.818 ***** 2026-01-07 00:44:31.124211 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.124229 | orchestrator | 2026-01-07 00:44:31.124247 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-07 00:44:31.124264 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.137) 0:00:11.955 ***** 2026-01-07 00:44:31.124283 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:44:31.124300 | orchestrator | 2026-01-07 00:44:31.124317 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-07 00:44:31.124336 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.133) 0:00:12.089 ***** 2026-01-07 00:44:31.124351 | orchestrator | changed: [testbed-node-3] => { 2026-01-07 00:44:31.124369 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-07 00:44:31.124388 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:44:31.124407 | orchestrator |  "sdb": { 2026-01-07 00:44:31.124425 | orchestrator |  "osd_lvm_uuid": "ef56a04c-76f1-5b5f-91f5-fd927a7d00fc" 2026-01-07 00:44:31.124444 | orchestrator |  }, 2026-01-07 00:44:31.124465 | orchestrator |  "sdc": { 2026-01-07 00:44:31.124483 | orchestrator |  "osd_lvm_uuid": "35426297-011a-51b2-a2d6-4f3d2a544c0e" 2026-01-07 00:44:31.124501 | orchestrator |  } 2026-01-07 00:44:31.124513 | orchestrator |  }, 2026-01-07 00:44:31.124524 | orchestrator |  "lvm_volumes": [ 2026-01-07 00:44:31.124535 | orchestrator |  { 2026-01-07 00:44:31.124546 | orchestrator |  "data": "osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc", 2026-01-07 00:44:31.124557 | orchestrator |  "data_vg": "ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc" 2026-01-07 00:44:31.124568 | orchestrator |  }, 
2026-01-07 00:44:31.124579 | orchestrator |  { 2026-01-07 00:44:31.124589 | orchestrator |  "data": "osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e", 2026-01-07 00:44:31.124600 | orchestrator |  "data_vg": "ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e" 2026-01-07 00:44:31.124621 | orchestrator |  } 2026-01-07 00:44:31.124632 | orchestrator |  ] 2026-01-07 00:44:31.124643 | orchestrator |  } 2026-01-07 00:44:31.124665 | orchestrator | } 2026-01-07 00:44:31.124675 | orchestrator | 2026-01-07 00:44:31.124686 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-07 00:44:31.124697 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.393) 0:00:12.482 ***** 2026-01-07 00:44:31.124708 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-07 00:44:31.124727 | orchestrator | 2026-01-07 00:44:31.124745 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-07 00:44:31.124764 | orchestrator | 2026-01-07 00:44:31.124781 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:44:31.124799 | orchestrator | Wednesday 07 January 2026 00:44:30 +0000 (0:00:01.813) 0:00:14.296 ***** 2026-01-07 00:44:31.124818 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-07 00:44:31.124837 | orchestrator | 2026-01-07 00:44:31.124856 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 00:44:31.124874 | orchestrator | Wednesday 07 January 2026 00:44:30 +0000 (0:00:00.241) 0:00:14.538 ***** 2026-01-07 00:44:31.124893 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:44:31.124912 | orchestrator | 2026-01-07 00:44:31.124947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885364 | orchestrator | Wednesday 07 January 2026 00:44:31 +0000 (0:00:00.229) 
0:00:14.767 ***** 2026-01-07 00:44:38.885454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-07 00:44:38.885462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:44:38.885466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:44:38.885470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:44:38.885474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:44:38.885479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:44:38.885482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:44:38.885487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:44:38.885491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-07 00:44:38.885495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:44:38.885499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:44:38.885506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:44:38.885510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:44:38.885514 | orchestrator | 2026-01-07 00:44:38.885518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885522 | orchestrator | Wednesday 07 January 2026 00:44:31 +0000 (0:00:00.379) 0:00:15.146 ***** 2026-01-07 00:44:38.885526 | orchestrator | skipping: 
[testbed-node-4] 2026-01-07 00:44:38.885532 | orchestrator | 2026-01-07 00:44:38.885535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885539 | orchestrator | Wednesday 07 January 2026 00:44:31 +0000 (0:00:00.197) 0:00:15.344 ***** 2026-01-07 00:44:38.885543 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885547 | orchestrator | 2026-01-07 00:44:38.885551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885554 | orchestrator | Wednesday 07 January 2026 00:44:31 +0000 (0:00:00.194) 0:00:15.538 ***** 2026-01-07 00:44:38.885558 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885562 | orchestrator | 2026-01-07 00:44:38.885566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885588 | orchestrator | Wednesday 07 January 2026 00:44:32 +0000 (0:00:00.208) 0:00:15.747 ***** 2026-01-07 00:44:38.885593 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885596 | orchestrator | 2026-01-07 00:44:38.885600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885604 | orchestrator | Wednesday 07 January 2026 00:44:32 +0000 (0:00:00.174) 0:00:15.921 ***** 2026-01-07 00:44:38.885608 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885612 | orchestrator | 2026-01-07 00:44:38.885616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885620 | orchestrator | Wednesday 07 January 2026 00:44:32 +0000 (0:00:00.614) 0:00:16.536 ***** 2026-01-07 00:44:38.885623 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885627 | orchestrator | 2026-01-07 00:44:38.885644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885647 | 
orchestrator | Wednesday 07 January 2026 00:44:33 +0000 (0:00:00.208) 0:00:16.745 ***** 2026-01-07 00:44:38.885651 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885655 | orchestrator | 2026-01-07 00:44:38.885659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885663 | orchestrator | Wednesday 07 January 2026 00:44:33 +0000 (0:00:00.204) 0:00:16.949 ***** 2026-01-07 00:44:38.885666 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885670 | orchestrator | 2026-01-07 00:44:38.885674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885678 | orchestrator | Wednesday 07 January 2026 00:44:33 +0000 (0:00:00.191) 0:00:17.141 ***** 2026-01-07 00:44:38.885681 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad) 2026-01-07 00:44:38.885686 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad) 2026-01-07 00:44:38.885690 | orchestrator | 2026-01-07 00:44:38.885694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885698 | orchestrator | Wednesday 07 January 2026 00:44:33 +0000 (0:00:00.438) 0:00:17.579 ***** 2026-01-07 00:44:38.885702 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d) 2026-01-07 00:44:38.885705 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d) 2026-01-07 00:44:38.885709 | orchestrator | 2026-01-07 00:44:38.885713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885717 | orchestrator | Wednesday 07 January 2026 00:44:34 +0000 (0:00:00.495) 0:00:18.074 ***** 2026-01-07 00:44:38.885720 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843) 2026-01-07 00:44:38.885724 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843) 2026-01-07 00:44:38.885728 | orchestrator | 2026-01-07 00:44:38.885732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885747 | orchestrator | Wednesday 07 January 2026 00:44:34 +0000 (0:00:00.426) 0:00:18.501 ***** 2026-01-07 00:44:38.885752 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e) 2026-01-07 00:44:38.885755 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e) 2026-01-07 00:44:38.885760 | orchestrator | 2026-01-07 00:44:38.885763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:38.885767 | orchestrator | Wednesday 07 January 2026 00:44:35 +0000 (0:00:00.515) 0:00:19.016 ***** 2026-01-07 00:44:38.885771 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:44:38.885775 | orchestrator | 2026-01-07 00:44:38.885779 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.885782 | orchestrator | Wednesday 07 January 2026 00:44:35 +0000 (0:00:00.343) 0:00:19.360 ***** 2026-01-07 00:44:38.885790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-07 00:44:38.885794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:44:38.885798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:44:38.885802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:44:38.885805 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:44:38.885809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:44:38.885813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:44:38.885817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:44:38.885821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-07 00:44:38.885824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:44:38.885828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:44:38.885832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:44:38.885836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:44:38.885839 | orchestrator | 2026-01-07 00:44:38.885843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.885847 | orchestrator | Wednesday 07 January 2026 00:44:36 +0000 (0:00:00.391) 0:00:19.751 ***** 2026-01-07 00:44:38.885851 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885855 | orchestrator | 2026-01-07 00:44:38.885858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.885866 | orchestrator | Wednesday 07 January 2026 00:44:36 +0000 (0:00:00.578) 0:00:20.329 ***** 2026-01-07 00:44:38.885870 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885874 | orchestrator | 2026-01-07 00:44:38.885879 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-01-07 00:44:38.885883 | orchestrator | Wednesday 07 January 2026 00:44:36 +0000 (0:00:00.189) 0:00:20.519 ***** 2026-01-07 00:44:38.885888 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885892 | orchestrator | 2026-01-07 00:44:38.885897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.885902 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:00.188) 0:00:20.707 ***** 2026-01-07 00:44:38.885906 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885910 | orchestrator | 2026-01-07 00:44:38.885915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.885919 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:00.174) 0:00:20.881 ***** 2026-01-07 00:44:38.885923 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885928 | orchestrator | 2026-01-07 00:44:38.885932 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.885937 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:00.159) 0:00:21.041 ***** 2026-01-07 00:44:38.885941 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885945 | orchestrator | 2026-01-07 00:44:38.885950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.885954 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:00.179) 0:00:21.220 ***** 2026-01-07 00:44:38.885958 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:38.885963 | orchestrator | 2026-01-07 00:44:38.885967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.885971 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:00.164) 0:00:21.385 ***** 2026-01-07 00:44:38.885980 | orchestrator | skipping: [testbed-node-4] 
2026-01-07 00:44:38.885984 | orchestrator | 2026-01-07 00:44:38.885988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.885993 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:00.165) 0:00:21.551 ***** 2026-01-07 00:44:38.885997 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-07 00:44:38.886003 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-07 00:44:38.886008 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-07 00:44:38.886012 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-07 00:44:38.886054 | orchestrator | 2026-01-07 00:44:38.886059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:38.886063 | orchestrator | Wednesday 07 January 2026 00:44:38 +0000 (0:00:00.804) 0:00:22.355 ***** 2026-01-07 00:44:38.886068 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.662964 | orchestrator | 2026-01-07 00:44:45.663067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:45.663080 | orchestrator | Wednesday 07 January 2026 00:44:38 +0000 (0:00:00.176) 0:00:22.532 ***** 2026-01-07 00:44:45.663089 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663099 | orchestrator | 2026-01-07 00:44:45.663106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:45.663154 | orchestrator | Wednesday 07 January 2026 00:44:39 +0000 (0:00:00.177) 0:00:22.710 ***** 2026-01-07 00:44:45.663162 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663170 | orchestrator | 2026-01-07 00:44:45.663178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:45.663186 | orchestrator | Wednesday 07 January 2026 00:44:39 +0000 (0:00:00.171) 0:00:22.881 ***** 2026-01-07 00:44:45.663194 | 
orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663202 | orchestrator | 2026-01-07 00:44:45.663210 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-07 00:44:45.663218 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:00.835) 0:00:23.717 ***** 2026-01-07 00:44:45.663227 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-07 00:44:45.663235 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-07 00:44:45.663242 | orchestrator | 2026-01-07 00:44:45.663250 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-07 00:44:45.663258 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:00.221) 0:00:23.938 ***** 2026-01-07 00:44:45.663265 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663274 | orchestrator | 2026-01-07 00:44:45.663281 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-07 00:44:45.663289 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:00.167) 0:00:24.106 ***** 2026-01-07 00:44:45.663297 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663304 | orchestrator | 2026-01-07 00:44:45.663312 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-07 00:44:45.663320 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:00.116) 0:00:24.223 ***** 2026-01-07 00:44:45.663328 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663335 | orchestrator | 2026-01-07 00:44:45.663343 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-07 00:44:45.663351 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:00.128) 0:00:24.352 ***** 2026-01-07 00:44:45.663358 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:44:45.663367 | 
orchestrator | 2026-01-07 00:44:45.663375 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-07 00:44:45.663382 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:00.129) 0:00:24.481 ***** 2026-01-07 00:44:45.663391 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e6008a2-36a5-590e-8013-ca4c2218d3f7'}}) 2026-01-07 00:44:45.663399 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16bf28f1-ae52-5ff4-8907-41e0bcdec1af'}}) 2026-01-07 00:44:45.663432 | orchestrator | 2026-01-07 00:44:45.663440 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-07 00:44:45.663447 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:00.171) 0:00:24.652 ***** 2026-01-07 00:44:45.663455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e6008a2-36a5-590e-8013-ca4c2218d3f7'}})  2026-01-07 00:44:45.663479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16bf28f1-ae52-5ff4-8907-41e0bcdec1af'}})  2026-01-07 00:44:45.663488 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663495 | orchestrator | 2026-01-07 00:44:45.663503 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-07 00:44:45.663511 | orchestrator | Wednesday 07 January 2026 00:44:41 +0000 (0:00:00.148) 0:00:24.800 ***** 2026-01-07 00:44:45.663519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e6008a2-36a5-590e-8013-ca4c2218d3f7'}})  2026-01-07 00:44:45.663527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16bf28f1-ae52-5ff4-8907-41e0bcdec1af'}})  2026-01-07 00:44:45.663535 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663542 | orchestrator | 2026-01-07 
00:44:45.663550 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-07 00:44:45.663558 | orchestrator | Wednesday 07 January 2026 00:44:41 +0000 (0:00:00.145) 0:00:24.946 ***** 2026-01-07 00:44:45.663567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e6008a2-36a5-590e-8013-ca4c2218d3f7'}})  2026-01-07 00:44:45.663575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16bf28f1-ae52-5ff4-8907-41e0bcdec1af'}})  2026-01-07 00:44:45.663582 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663590 | orchestrator | 2026-01-07 00:44:45.663598 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-07 00:44:45.663606 | orchestrator | Wednesday 07 January 2026 00:44:41 +0000 (0:00:00.172) 0:00:25.119 ***** 2026-01-07 00:44:45.663614 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:44:45.663621 | orchestrator | 2026-01-07 00:44:45.663629 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-07 00:44:45.663638 | orchestrator | Wednesday 07 January 2026 00:44:41 +0000 (0:00:00.126) 0:00:25.246 ***** 2026-01-07 00:44:45.663647 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:44:45.663656 | orchestrator | 2026-01-07 00:44:45.663663 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-07 00:44:45.663671 | orchestrator | Wednesday 07 January 2026 00:44:41 +0000 (0:00:00.131) 0:00:25.377 ***** 2026-01-07 00:44:45.663696 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663705 | orchestrator | 2026-01-07 00:44:45.663713 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-07 00:44:45.663720 | orchestrator | Wednesday 07 January 2026 00:44:42 +0000 (0:00:00.365) 0:00:25.742 ***** 2026-01-07 
00:44:45.663728 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663735 | orchestrator | 2026-01-07 00:44:45.663742 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-07 00:44:45.663750 | orchestrator | Wednesday 07 January 2026 00:44:42 +0000 (0:00:00.135) 0:00:25.878 ***** 2026-01-07 00:44:45.663757 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663764 | orchestrator | 2026-01-07 00:44:45.663772 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-07 00:44:45.663779 | orchestrator | Wednesday 07 January 2026 00:44:42 +0000 (0:00:00.130) 0:00:26.009 ***** 2026-01-07 00:44:45.663786 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 00:44:45.663793 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:44:45.663800 | orchestrator |  "sdb": { 2026-01-07 00:44:45.663808 | orchestrator |  "osd_lvm_uuid": "4e6008a2-36a5-590e-8013-ca4c2218d3f7" 2026-01-07 00:44:45.663824 | orchestrator |  }, 2026-01-07 00:44:45.663833 | orchestrator |  "sdc": { 2026-01-07 00:44:45.663840 | orchestrator |  "osd_lvm_uuid": "16bf28f1-ae52-5ff4-8907-41e0bcdec1af" 2026-01-07 00:44:45.663848 | orchestrator |  } 2026-01-07 00:44:45.663856 | orchestrator |  } 2026-01-07 00:44:45.663863 | orchestrator | } 2026-01-07 00:44:45.663871 | orchestrator | 2026-01-07 00:44:45.663879 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-07 00:44:45.663886 | orchestrator | Wednesday 07 January 2026 00:44:42 +0000 (0:00:00.163) 0:00:26.172 ***** 2026-01-07 00:44:45.663893 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663901 | orchestrator | 2026-01-07 00:44:45.663909 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-07 00:44:45.663916 | orchestrator | Wednesday 07 January 2026 00:44:42 +0000 (0:00:00.148) 0:00:26.321 ***** 2026-01-07 
00:44:45.663924 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663931 | orchestrator | 2026-01-07 00:44:45.663939 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-07 00:44:45.663946 | orchestrator | Wednesday 07 January 2026 00:44:42 +0000 (0:00:00.122) 0:00:26.444 ***** 2026-01-07 00:44:45.663953 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:44:45.663960 | orchestrator | 2026-01-07 00:44:45.663967 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-07 00:44:45.663975 | orchestrator | Wednesday 07 January 2026 00:44:42 +0000 (0:00:00.132) 0:00:26.576 ***** 2026-01-07 00:44:45.663982 | orchestrator | changed: [testbed-node-4] => { 2026-01-07 00:44:45.663991 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-07 00:44:45.663999 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:44:45.664007 | orchestrator |  "sdb": { 2026-01-07 00:44:45.664014 | orchestrator |  "osd_lvm_uuid": "4e6008a2-36a5-590e-8013-ca4c2218d3f7" 2026-01-07 00:44:45.664022 | orchestrator |  }, 2026-01-07 00:44:45.664030 | orchestrator |  "sdc": { 2026-01-07 00:44:45.664037 | orchestrator |  "osd_lvm_uuid": "16bf28f1-ae52-5ff4-8907-41e0bcdec1af" 2026-01-07 00:44:45.664044 | orchestrator |  } 2026-01-07 00:44:45.664051 | orchestrator |  }, 2026-01-07 00:44:45.664060 | orchestrator |  "lvm_volumes": [ 2026-01-07 00:44:45.664067 | orchestrator |  { 2026-01-07 00:44:45.664075 | orchestrator |  "data": "osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7", 2026-01-07 00:44:45.664082 | orchestrator |  "data_vg": "ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7" 2026-01-07 00:44:45.664090 | orchestrator |  }, 2026-01-07 00:44:45.664097 | orchestrator |  { 2026-01-07 00:44:45.664105 | orchestrator |  "data": "osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af", 2026-01-07 00:44:45.664154 | orchestrator |  "data_vg": "ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af" 2026-01-07 
00:44:45.664163 | orchestrator |  } 2026-01-07 00:44:45.664171 | orchestrator |  ] 2026-01-07 00:44:45.664178 | orchestrator |  } 2026-01-07 00:44:45.664186 | orchestrator | } 2026-01-07 00:44:45.664194 | orchestrator | 2026-01-07 00:44:45.664199 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-07 00:44:45.664204 | orchestrator | Wednesday 07 January 2026 00:44:43 +0000 (0:00:00.219) 0:00:26.796 ***** 2026-01-07 00:44:45.664209 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-07 00:44:45.664214 | orchestrator | 2026-01-07 00:44:45.664218 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-07 00:44:45.664223 | orchestrator | 2026-01-07 00:44:45.664227 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:44:45.664232 | orchestrator | Wednesday 07 January 2026 00:44:44 +0000 (0:00:01.105) 0:00:27.901 ***** 2026-01-07 00:44:45.664236 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-07 00:44:45.664241 | orchestrator | 2026-01-07 00:44:45.664246 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 00:44:45.664263 | orchestrator | Wednesday 07 January 2026 00:44:44 +0000 (0:00:00.725) 0:00:28.627 ***** 2026-01-07 00:44:45.664271 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:44:45.664278 | orchestrator | 2026-01-07 00:44:45.664286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:45.664293 | orchestrator | Wednesday 07 January 2026 00:44:45 +0000 (0:00:00.244) 0:00:28.871 ***** 2026-01-07 00:44:45.664301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-07 00:44:45.664308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => 
(item=loop1) 2026-01-07 00:44:45.664316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-07 00:44:45.664322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-07 00:44:45.664330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-07 00:44:45.664346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-07 00:44:53.693907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-07 00:44:53.693998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-07 00:44:53.694008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-07 00:44:53.694070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-07 00:44:53.694079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-07 00:44:53.694086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-07 00:44:53.694094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-07 00:44:53.694102 | orchestrator | 2026-01-07 00:44:53.694136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694150 | orchestrator | Wednesday 07 January 2026 00:44:45 +0000 (0:00:00.440) 0:00:29.312 ***** 2026-01-07 00:44:53.694163 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694179 | orchestrator | 2026-01-07 00:44:53.694192 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694206 | orchestrator | Wednesday 07 January 2026 00:44:45 +0000 
(0:00:00.166) 0:00:29.479 ***** 2026-01-07 00:44:53.694214 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694222 | orchestrator | 2026-01-07 00:44:53.694229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694237 | orchestrator | Wednesday 07 January 2026 00:44:45 +0000 (0:00:00.167) 0:00:29.646 ***** 2026-01-07 00:44:53.694244 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694251 | orchestrator | 2026-01-07 00:44:53.694259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694266 | orchestrator | Wednesday 07 January 2026 00:44:46 +0000 (0:00:00.211) 0:00:29.858 ***** 2026-01-07 00:44:53.694273 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694280 | orchestrator | 2026-01-07 00:44:53.694287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694294 | orchestrator | Wednesday 07 January 2026 00:44:46 +0000 (0:00:00.187) 0:00:30.045 ***** 2026-01-07 00:44:53.694302 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694309 | orchestrator | 2026-01-07 00:44:53.694316 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694323 | orchestrator | Wednesday 07 January 2026 00:44:46 +0000 (0:00:00.200) 0:00:30.245 ***** 2026-01-07 00:44:53.694330 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694342 | orchestrator | 2026-01-07 00:44:53.694354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694395 | orchestrator | Wednesday 07 January 2026 00:44:46 +0000 (0:00:00.185) 0:00:30.431 ***** 2026-01-07 00:44:53.694408 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694421 | orchestrator | 2026-01-07 00:44:53.694429 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2026-01-07 00:44:53.694436 | orchestrator | Wednesday 07 January 2026 00:44:46 +0000 (0:00:00.205) 0:00:30.637 ***** 2026-01-07 00:44:53.694443 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694452 | orchestrator | 2026-01-07 00:44:53.694461 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694469 | orchestrator | Wednesday 07 January 2026 00:44:47 +0000 (0:00:00.193) 0:00:30.831 ***** 2026-01-07 00:44:53.694478 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f) 2026-01-07 00:44:53.694487 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f) 2026-01-07 00:44:53.694495 | orchestrator | 2026-01-07 00:44:53.694504 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694512 | orchestrator | Wednesday 07 January 2026 00:44:47 +0000 (0:00:00.699) 0:00:31.530 ***** 2026-01-07 00:44:53.694520 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7) 2026-01-07 00:44:53.694529 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7) 2026-01-07 00:44:53.694537 | orchestrator | 2026-01-07 00:44:53.694545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694553 | orchestrator | Wednesday 07 January 2026 00:44:48 +0000 (0:00:00.411) 0:00:31.942 ***** 2026-01-07 00:44:53.694562 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616) 2026-01-07 00:44:53.694571 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616) 2026-01-07 00:44:53.694579 | orchestrator | 2026-01-07 00:44:53.694588 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694597 | orchestrator | Wednesday 07 January 2026 00:44:48 +0000 (0:00:00.467) 0:00:32.409 ***** 2026-01-07 00:44:53.694606 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5) 2026-01-07 00:44:53.694614 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5) 2026-01-07 00:44:53.694622 | orchestrator | 2026-01-07 00:44:53.694630 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:44:53.694639 | orchestrator | Wednesday 07 January 2026 00:44:49 +0000 (0:00:00.498) 0:00:32.908 ***** 2026-01-07 00:44:53.694647 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:44:53.694655 | orchestrator | 2026-01-07 00:44:53.694664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.694689 | orchestrator | Wednesday 07 January 2026 00:44:49 +0000 (0:00:00.380) 0:00:33.288 ***** 2026-01-07 00:44:53.694699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-07 00:44:53.694707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-07 00:44:53.694716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-07 00:44:53.694724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-07 00:44:53.694733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-07 00:44:53.694758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-07 00:44:53.694767 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-07 00:44:53.694776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-07 00:44:53.694792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-07 00:44:53.694801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-07 00:44:53.694808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-07 00:44:53.694815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-07 00:44:53.694822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-07 00:44:53.694830 | orchestrator | 2026-01-07 00:44:53.694837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.694844 | orchestrator | Wednesday 07 January 2026 00:44:50 +0000 (0:00:00.443) 0:00:33.732 ***** 2026-01-07 00:44:53.694851 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694858 | orchestrator | 2026-01-07 00:44:53.694865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.694873 | orchestrator | Wednesday 07 January 2026 00:44:50 +0000 (0:00:00.228) 0:00:33.961 ***** 2026-01-07 00:44:53.694880 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694887 | orchestrator | 2026-01-07 00:44:53.694894 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.694905 | orchestrator | Wednesday 07 January 2026 00:44:50 +0000 (0:00:00.226) 0:00:34.187 ***** 2026-01-07 00:44:53.694913 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694920 | orchestrator | 2026-01-07 00:44:53.694927 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.694934 | orchestrator | Wednesday 07 January 2026 00:44:50 +0000 (0:00:00.226) 0:00:34.413 ***** 2026-01-07 00:44:53.694941 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694949 | orchestrator | 2026-01-07 00:44:53.694956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.694963 | orchestrator | Wednesday 07 January 2026 00:44:50 +0000 (0:00:00.213) 0:00:34.626 ***** 2026-01-07 00:44:53.694970 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.694977 | orchestrator | 2026-01-07 00:44:53.694984 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.694991 | orchestrator | Wednesday 07 January 2026 00:44:51 +0000 (0:00:00.184) 0:00:34.810 ***** 2026-01-07 00:44:53.694999 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.695006 | orchestrator | 2026-01-07 00:44:53.695013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.695020 | orchestrator | Wednesday 07 January 2026 00:44:51 +0000 (0:00:00.674) 0:00:35.485 ***** 2026-01-07 00:44:53.695027 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.695034 | orchestrator | 2026-01-07 00:44:53.695041 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.695049 | orchestrator | Wednesday 07 January 2026 00:44:52 +0000 (0:00:00.206) 0:00:35.691 ***** 2026-01-07 00:44:53.695056 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.695063 | orchestrator | 2026-01-07 00:44:53.695070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.695077 | orchestrator | Wednesday 07 January 2026 00:44:52 +0000 (0:00:00.185) 0:00:35.877 ***** 
2026-01-07 00:44:53.695084 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-07 00:44:53.695092 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-07 00:44:53.695100 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-07 00:44:53.695129 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-07 00:44:53.695137 | orchestrator | 2026-01-07 00:44:53.695145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.695152 | orchestrator | Wednesday 07 January 2026 00:44:52 +0000 (0:00:00.660) 0:00:36.538 ***** 2026-01-07 00:44:53.695159 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.695172 | orchestrator | 2026-01-07 00:44:53.695180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.695187 | orchestrator | Wednesday 07 January 2026 00:44:53 +0000 (0:00:00.229) 0:00:36.767 ***** 2026-01-07 00:44:53.695194 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.695201 | orchestrator | 2026-01-07 00:44:53.695209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.695216 | orchestrator | Wednesday 07 January 2026 00:44:53 +0000 (0:00:00.206) 0:00:36.974 ***** 2026-01-07 00:44:53.695223 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.695230 | orchestrator | 2026-01-07 00:44:53.695237 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:44:53.695245 | orchestrator | Wednesday 07 January 2026 00:44:53 +0000 (0:00:00.184) 0:00:37.158 ***** 2026-01-07 00:44:53.695252 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:53.695259 | orchestrator | 2026-01-07 00:44:53.695272 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-07 00:44:58.460903 | orchestrator | Wednesday 07 January 2026 00:44:53 
+0000 (0:00:00.187) 0:00:37.345 ***** 2026-01-07 00:44:58.461018 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-07 00:44:58.461029 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-07 00:44:58.461042 | orchestrator | 2026-01-07 00:44:58.461056 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-07 00:44:58.461068 | orchestrator | Wednesday 07 January 2026 00:44:53 +0000 (0:00:00.177) 0:00:37.523 ***** 2026-01-07 00:44:58.461079 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461092 | orchestrator | 2026-01-07 00:44:58.461128 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-07 00:44:58.461140 | orchestrator | Wednesday 07 January 2026 00:44:54 +0000 (0:00:00.160) 0:00:37.684 ***** 2026-01-07 00:44:58.461152 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461163 | orchestrator | 2026-01-07 00:44:58.461175 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-07 00:44:58.461188 | orchestrator | Wednesday 07 January 2026 00:44:54 +0000 (0:00:00.153) 0:00:37.837 ***** 2026-01-07 00:44:58.461200 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461212 | orchestrator | 2026-01-07 00:44:58.461223 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-07 00:44:58.461231 | orchestrator | Wednesday 07 January 2026 00:44:54 +0000 (0:00:00.382) 0:00:38.219 ***** 2026-01-07 00:44:58.461238 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:44:58.461246 | orchestrator | 2026-01-07 00:44:58.461254 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-07 00:44:58.461262 | orchestrator | Wednesday 07 January 2026 00:44:54 +0000 (0:00:00.128) 0:00:38.348 ***** 2026-01-07 00:44:58.461270 | orchestrator | 
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bbd296ce-f103-5a39-9243-23354e346d82'}}) 2026-01-07 00:44:58.461278 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5711b466-e770-5253-91be-c96275afda22'}}) 2026-01-07 00:44:58.461286 | orchestrator | 2026-01-07 00:44:58.461293 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-07 00:44:58.461300 | orchestrator | Wednesday 07 January 2026 00:44:54 +0000 (0:00:00.205) 0:00:38.554 ***** 2026-01-07 00:44:58.461308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bbd296ce-f103-5a39-9243-23354e346d82'}})  2026-01-07 00:44:58.461318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5711b466-e770-5253-91be-c96275afda22'}})  2026-01-07 00:44:58.461325 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461333 | orchestrator | 2026-01-07 00:44:58.461340 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-07 00:44:58.461347 | orchestrator | Wednesday 07 January 2026 00:44:55 +0000 (0:00:00.157) 0:00:38.711 ***** 2026-01-07 00:44:58.461383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bbd296ce-f103-5a39-9243-23354e346d82'}})  2026-01-07 00:44:58.461394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5711b466-e770-5253-91be-c96275afda22'}})  2026-01-07 00:44:58.461407 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461420 | orchestrator | 2026-01-07 00:44:58.461433 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-07 00:44:58.461446 | orchestrator | Wednesday 07 January 2026 00:44:55 +0000 (0:00:00.180) 0:00:38.892 ***** 2026-01-07 00:44:58.461477 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bbd296ce-f103-5a39-9243-23354e346d82'}})  2026-01-07 00:44:58.461487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5711b466-e770-5253-91be-c96275afda22'}})  2026-01-07 00:44:58.461496 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461504 | orchestrator | 2026-01-07 00:44:58.461513 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-07 00:44:58.461522 | orchestrator | Wednesday 07 January 2026 00:44:55 +0000 (0:00:00.169) 0:00:39.062 ***** 2026-01-07 00:44:58.461530 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:44:58.461537 | orchestrator | 2026-01-07 00:44:58.461544 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-07 00:44:58.461551 | orchestrator | Wednesday 07 January 2026 00:44:55 +0000 (0:00:00.173) 0:00:39.235 ***** 2026-01-07 00:44:58.461558 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:44:58.461565 | orchestrator | 2026-01-07 00:44:58.461572 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-07 00:44:58.461580 | orchestrator | Wednesday 07 January 2026 00:44:55 +0000 (0:00:00.129) 0:00:39.364 ***** 2026-01-07 00:44:58.461587 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461594 | orchestrator | 2026-01-07 00:44:58.461601 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-07 00:44:58.461609 | orchestrator | Wednesday 07 January 2026 00:44:55 +0000 (0:00:00.123) 0:00:39.487 ***** 2026-01-07 00:44:58.461616 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461623 | orchestrator | 2026-01-07 00:44:58.461630 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-07 00:44:58.461637 | orchestrator | Wednesday 07 January 2026 00:44:55 +0000 
(0:00:00.140) 0:00:39.628 ***** 2026-01-07 00:44:58.461645 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461652 | orchestrator | 2026-01-07 00:44:58.461659 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-07 00:44:58.461666 | orchestrator | Wednesday 07 January 2026 00:44:56 +0000 (0:00:00.135) 0:00:39.764 ***** 2026-01-07 00:44:58.461673 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:44:58.461681 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:44:58.461688 | orchestrator |  "sdb": { 2026-01-07 00:44:58.461715 | orchestrator |  "osd_lvm_uuid": "bbd296ce-f103-5a39-9243-23354e346d82" 2026-01-07 00:44:58.461723 | orchestrator |  }, 2026-01-07 00:44:58.461731 | orchestrator |  "sdc": { 2026-01-07 00:44:58.461738 | orchestrator |  "osd_lvm_uuid": "5711b466-e770-5253-91be-c96275afda22" 2026-01-07 00:44:58.461746 | orchestrator |  } 2026-01-07 00:44:58.461754 | orchestrator |  } 2026-01-07 00:44:58.461762 | orchestrator | } 2026-01-07 00:44:58.461769 | orchestrator | 2026-01-07 00:44:58.461776 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-07 00:44:58.461784 | orchestrator | Wednesday 07 January 2026 00:44:56 +0000 (0:00:00.177) 0:00:39.942 ***** 2026-01-07 00:44:58.461791 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461798 | orchestrator | 2026-01-07 00:44:58.461805 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-07 00:44:58.461812 | orchestrator | Wednesday 07 January 2026 00:44:56 +0000 (0:00:00.386) 0:00:40.328 ***** 2026-01-07 00:44:58.461828 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461835 | orchestrator | 2026-01-07 00:44:58.461842 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-07 00:44:58.461850 | orchestrator | Wednesday 07 January 2026 00:44:56 +0000 
(0:00:00.118) 0:00:40.446 ***** 2026-01-07 00:44:58.461857 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:44:58.461864 | orchestrator | 2026-01-07 00:44:58.461871 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-07 00:44:58.461878 | orchestrator | Wednesday 07 January 2026 00:44:56 +0000 (0:00:00.150) 0:00:40.597 ***** 2026-01-07 00:44:58.461885 | orchestrator | changed: [testbed-node-5] => { 2026-01-07 00:44:58.461893 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-07 00:44:58.461900 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:44:58.461908 | orchestrator |  "sdb": { 2026-01-07 00:44:58.461915 | orchestrator |  "osd_lvm_uuid": "bbd296ce-f103-5a39-9243-23354e346d82" 2026-01-07 00:44:58.461922 | orchestrator |  }, 2026-01-07 00:44:58.461930 | orchestrator |  "sdc": { 2026-01-07 00:44:58.461937 | orchestrator |  "osd_lvm_uuid": "5711b466-e770-5253-91be-c96275afda22" 2026-01-07 00:44:58.461944 | orchestrator |  } 2026-01-07 00:44:58.461951 | orchestrator |  }, 2026-01-07 00:44:58.461959 | orchestrator |  "lvm_volumes": [ 2026-01-07 00:44:58.461966 | orchestrator |  { 2026-01-07 00:44:58.461973 | orchestrator |  "data": "osd-block-bbd296ce-f103-5a39-9243-23354e346d82", 2026-01-07 00:44:58.461981 | orchestrator |  "data_vg": "ceph-bbd296ce-f103-5a39-9243-23354e346d82" 2026-01-07 00:44:58.461988 | orchestrator |  }, 2026-01-07 00:44:58.461995 | orchestrator |  { 2026-01-07 00:44:58.462003 | orchestrator |  "data": "osd-block-5711b466-e770-5253-91be-c96275afda22", 2026-01-07 00:44:58.462010 | orchestrator |  "data_vg": "ceph-5711b466-e770-5253-91be-c96275afda22" 2026-01-07 00:44:58.462074 | orchestrator |  } 2026-01-07 00:44:58.462086 | orchestrator |  ] 2026-01-07 00:44:58.462094 | orchestrator |  } 2026-01-07 00:44:58.462161 | orchestrator | } 2026-01-07 00:44:58.462171 | orchestrator | 2026-01-07 00:44:58.462181 | orchestrator | RUNNING HANDLER [Write configuration file] 
************************************* 2026-01-07 00:44:58.462194 | orchestrator | Wednesday 07 January 2026 00:44:57 +0000 (0:00:00.208) 0:00:40.806 ***** 2026-01-07 00:44:58.462207 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-07 00:44:58.462220 | orchestrator | 2026-01-07 00:44:58.462232 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:44:58.462244 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 00:44:58.462258 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 00:44:58.462271 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 00:44:58.462284 | orchestrator | 2026-01-07 00:44:58.462294 | orchestrator | 2026-01-07 00:44:58.462302 | orchestrator | 2026-01-07 00:44:58.462309 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:44:58.462316 | orchestrator | Wednesday 07 January 2026 00:44:58 +0000 (0:00:01.237) 0:00:42.043 ***** 2026-01-07 00:44:58.462324 | orchestrator | =============================================================================== 2026-01-07 00:44:58.462331 | orchestrator | Write configuration file ------------------------------------------------ 4.16s 2026-01-07 00:44:58.462342 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s 2026-01-07 00:44:58.462354 | orchestrator | Add known partitions to the list of available block devices ------------- 1.26s 2026-01-07 00:44:58.462366 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.20s 2026-01-07 00:44:58.462387 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2026-01-07 00:44:58.462396 | orchestrator | Add 
known links to the list of available block devices ------------------ 0.86s 2026-01-07 00:44:58.462403 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-01-07 00:44:58.462410 | orchestrator | Print configuration data ------------------------------------------------ 0.82s 2026-01-07 00:44:58.462418 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2026-01-07 00:44:58.462425 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-01-07 00:44:58.462432 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2026-01-07 00:44:58.462439 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-01-07 00:44:58.462447 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.67s 2026-01-07 00:44:58.462462 | orchestrator | Print WAL devices ------------------------------------------------------- 0.67s 2026-01-07 00:44:58.799057 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2026-01-07 00:44:58.799237 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.66s 2026-01-07 00:44:58.799258 | orchestrator | Set DB devices config data ---------------------------------------------- 0.64s 2026-01-07 00:44:58.799274 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-01-07 00:44:58.799291 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-01-07 00:44:58.799306 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.57s 2026-01-07 00:45:21.381828 | orchestrator | 2026-01-07 00:45:21 | INFO  | Task 2b2732b2-d229-4017-bea5-50501d95f18f (sync inventory) is running in background. Output coming soon. 
2026-01-07 00:45:50.881621 | orchestrator | 2026-01-07 00:45:22 | INFO  | Starting group_vars file reorganization 2026-01-07 00:45:50.881677 | orchestrator | 2026-01-07 00:45:22 | INFO  | Moved 0 file(s) to their respective directories 2026-01-07 00:45:50.881684 | orchestrator | 2026-01-07 00:45:22 | INFO  | Group_vars file reorganization completed 2026-01-07 00:45:50.881688 | orchestrator | 2026-01-07 00:45:25 | INFO  | Starting variable preparation from inventory 2026-01-07 00:45:50.881692 | orchestrator | 2026-01-07 00:45:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-07 00:45:50.881697 | orchestrator | 2026-01-07 00:45:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-07 00:45:50.881713 | orchestrator | 2026-01-07 00:45:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-07 00:45:50.881717 | orchestrator | 2026-01-07 00:45:28 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-07 00:45:50.881721 | orchestrator | 2026-01-07 00:45:28 | INFO  | Variable preparation completed 2026-01-07 00:45:50.881726 | orchestrator | 2026-01-07 00:45:30 | INFO  | Starting inventory overwrite handling 2026-01-07 00:45:50.881732 | orchestrator | 2026-01-07 00:45:30 | INFO  | Handling group overwrites in 99-overwrite 2026-01-07 00:45:50.881736 | orchestrator | 2026-01-07 00:45:30 | INFO  | Removing group frr:children from 60-generic 2026-01-07 00:45:50.881740 | orchestrator | 2026-01-07 00:45:30 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-07 00:45:50.881743 | orchestrator | 2026-01-07 00:45:30 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-07 00:45:50.881748 | orchestrator | 2026-01-07 00:45:30 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-07 00:45:50.881751 | orchestrator | 2026-01-07 00:45:30 | INFO  | Handling group overwrites in 20-roles 2026-01-07 00:45:50.881769 | orchestrator | 2026-01-07 00:45:30 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-01-07 00:45:50.881773 | orchestrator | 2026-01-07 00:45:30 | INFO  | Removed 5 group(s) in total 2026-01-07 00:45:50.881777 | orchestrator | 2026-01-07 00:45:30 | INFO  | Inventory overwrite handling completed 2026-01-07 00:45:50.881781 | orchestrator | 2026-01-07 00:45:31 | INFO  | Starting merge of inventory files 2026-01-07 00:45:50.881785 | orchestrator | 2026-01-07 00:45:31 | INFO  | Inventory files merged successfully 2026-01-07 00:45:50.881789 | orchestrator | 2026-01-07 00:45:37 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-07 00:45:50.881792 | orchestrator | 2026-01-07 00:45:49 | INFO  | Successfully wrote ClusterShell configuration 2026-01-07 00:45:50.881796 | orchestrator | [master 85794d2] 2026-01-07-00-45 2026-01-07 00:45:50.881801 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-01-07 00:45:53.299071 | orchestrator | 2026-01-07 00:45:53 | INFO  | Task 9a03d0b7-84aa-4b62-a5b4-3269c22a19b4 (ceph-create-lvm-devices) was prepared for execution. 2026-01-07 00:45:53.299132 | orchestrator | 2026-01-07 00:45:53 | INFO  | It takes a moment until task 9a03d0b7-84aa-4b62-a5b4-3269c22a19b4 (ceph-create-lvm-devices) has been started and output is visible here. 
2026-01-07 00:46:05.034162 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-07 00:46:05.034227 | orchestrator | 2.16.14
2026-01-07 00:46:05.034234 | orchestrator |
2026-01-07 00:46:05.034238 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-07 00:46:05.034243 | orchestrator |
2026-01-07 00:46:05.034247 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-07 00:46:05.034252 | orchestrator | Wednesday 07 January 2026 00:45:57 +0000 (0:00:00.308) 0:00:00.308 *****
2026-01-07 00:46:05.034256 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-07 00:46:05.034261 | orchestrator |
2026-01-07 00:46:05.034265 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-07 00:46:05.034269 | orchestrator | Wednesday 07 January 2026 00:45:58 +0000 (0:00:00.260) 0:00:00.568 *****
2026-01-07 00:46:05.034273 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:46:05.034277 | orchestrator |
2026-01-07 00:46:05.034281 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034285 | orchestrator | Wednesday 07 January 2026 00:45:58 +0000 (0:00:00.229) 0:00:00.798 *****
2026-01-07 00:46:05.034289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-07 00:46:05.034293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-07 00:46:05.034297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-07 00:46:05.034301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-07 00:46:05.034305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-07 00:46:05.034309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-07 00:46:05.034313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-07 00:46:05.034316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-07 00:46:05.034320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-07 00:46:05.034324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-07 00:46:05.034328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-07 00:46:05.034332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-07 00:46:05.034353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-07 00:46:05.034357 | orchestrator |
2026-01-07 00:46:05.034360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034364 | orchestrator | Wednesday 07 January 2026 00:45:58 +0000 (0:00:00.531) 0:00:01.330 *****
2026-01-07 00:46:05.034368 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034372 | orchestrator |
2026-01-07 00:46:05.034376 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034380 | orchestrator | Wednesday 07 January 2026 00:45:59 +0000 (0:00:00.199) 0:00:01.529 *****
2026-01-07 00:46:05.034384 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034387 | orchestrator |
2026-01-07 00:46:05.034391 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034395 | orchestrator | Wednesday 07 January 2026 00:45:59 +0000 (0:00:00.199) 0:00:01.728 *****
2026-01-07 00:46:05.034399 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034403 | orchestrator |
2026-01-07 00:46:05.034407 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034411 | orchestrator | Wednesday 07 January 2026 00:45:59 +0000 (0:00:00.176) 0:00:01.905 *****
2026-01-07 00:46:05.034415 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034418 | orchestrator |
2026-01-07 00:46:05.034422 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034426 | orchestrator | Wednesday 07 January 2026 00:45:59 +0000 (0:00:00.198) 0:00:02.104 *****
2026-01-07 00:46:05.034430 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034434 | orchestrator |
2026-01-07 00:46:05.034437 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034441 | orchestrator | Wednesday 07 January 2026 00:46:00 +0000 (0:00:00.232) 0:00:02.336 *****
2026-01-07 00:46:05.034445 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034449 | orchestrator |
2026-01-07 00:46:05.034452 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034456 | orchestrator | Wednesday 07 January 2026 00:46:00 +0000 (0:00:00.194) 0:00:02.530 *****
2026-01-07 00:46:05.034460 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034464 | orchestrator |
2026-01-07 00:46:05.034468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034471 | orchestrator | Wednesday 07 January 2026 00:46:00 +0000 (0:00:00.226) 0:00:02.757 *****
2026-01-07 00:46:05.034475 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034479 | orchestrator |
2026-01-07 00:46:05.034483 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034487 | orchestrator | Wednesday 07 January 2026 00:46:00 +0000 (0:00:00.186) 0:00:02.944 *****
2026-01-07 00:46:05.034490 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990)
2026-01-07 00:46:05.034495 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990)
2026-01-07 00:46:05.034499 | orchestrator |
2026-01-07 00:46:05.034503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034516 | orchestrator | Wednesday 07 January 2026 00:46:01 +0000 (0:00:00.413) 0:00:03.357 *****
2026-01-07 00:46:05.034521 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463)
2026-01-07 00:46:05.034525 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463)
2026-01-07 00:46:05.034528 | orchestrator |
2026-01-07 00:46:05.034532 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034536 | orchestrator | Wednesday 07 January 2026 00:46:01 +0000 (0:00:00.617) 0:00:03.975 *****
2026-01-07 00:46:05.034540 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e)
2026-01-07 00:46:05.034548 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e)
2026-01-07 00:46:05.034552 | orchestrator |
2026-01-07 00:46:05.034556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034560 | orchestrator | Wednesday 07 January 2026 00:46:02 +0000 (0:00:00.512) 0:00:04.487 *****
2026-01-07 00:46:05.034564 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac)
2026-01-07 00:46:05.034568 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac)
2026-01-07 00:46:05.034571 | orchestrator |
2026-01-07 00:46:05.034575 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:05.034579 | orchestrator | Wednesday 07 January 2026 00:46:02 +0000 (0:00:00.695) 0:00:05.182 *****
2026-01-07 00:46:05.034583 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-07 00:46:05.034587 | orchestrator |
2026-01-07 00:46:05.034591 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:05.034595 | orchestrator | Wednesday 07 January 2026 00:46:03 +0000 (0:00:00.298) 0:00:05.481 *****
2026-01-07 00:46:05.034599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-07 00:46:05.034602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-07 00:46:05.034606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-07 00:46:05.034621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-07 00:46:05.034625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-07 00:46:05.034629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-07 00:46:05.034633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-07 00:46:05.034637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-07 00:46:05.034641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-07 00:46:05.034644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-07 00:46:05.034648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-07 00:46:05.034654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-07 00:46:05.034658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-07 00:46:05.034661 | orchestrator |
2026-01-07 00:46:05.034665 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:05.034669 | orchestrator | Wednesday 07 January 2026 00:46:03 +0000 (0:00:00.366) 0:00:05.847 *****
2026-01-07 00:46:05.034673 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034677 | orchestrator |
2026-01-07 00:46:05.034680 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:05.034684 | orchestrator | Wednesday 07 January 2026 00:46:03 +0000 (0:00:00.168) 0:00:06.016 *****
2026-01-07 00:46:05.034688 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034692 | orchestrator |
2026-01-07 00:46:05.034696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:05.034699 | orchestrator | Wednesday 07 January 2026 00:46:03 +0000 (0:00:00.194) 0:00:06.210 *****
2026-01-07 00:46:05.034703 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034707 | orchestrator |
2026-01-07 00:46:05.034711 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:05.034715 | orchestrator | Wednesday 07 January 2026 00:46:04 +0000 (0:00:00.249) 0:00:06.460 *****
2026-01-07 00:46:05.034718 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034724 | orchestrator |
2026-01-07 00:46:05.034728 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:05.034732 | orchestrator | Wednesday 07 January 2026 00:46:04 +0000 (0:00:00.236) 0:00:06.696 *****
2026-01-07 00:46:05.034736 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034739 | orchestrator |
2026-01-07 00:46:05.034743 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:05.034747 | orchestrator | Wednesday 07 January 2026 00:46:04 +0000 (0:00:00.197) 0:00:06.894 *****
2026-01-07 00:46:05.034751 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034755 | orchestrator |
2026-01-07 00:46:05.034758 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:05.034762 | orchestrator | Wednesday 07 January 2026 00:46:04 +0000 (0:00:00.244) 0:00:07.138 *****
2026-01-07 00:46:05.034766 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:05.034770 | orchestrator |
2026-01-07 00:46:05.034776 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:13.524264 | orchestrator | Wednesday 07 January 2026 00:46:05 +0000 (0:00:00.213) 0:00:07.352 *****
2026-01-07 00:46:13.524353 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524366 | orchestrator |
2026-01-07 00:46:13.524374 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:13.524381 | orchestrator | Wednesday 07 January 2026 00:46:05 +0000 (0:00:00.210) 0:00:07.562 *****
2026-01-07 00:46:13.524388 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-07 00:46:13.524395 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-07 00:46:13.524402 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-07 00:46:13.524409 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-07 00:46:13.524416 | orchestrator |
2026-01-07 00:46:13.524423 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:13.524430 | orchestrator | Wednesday 07 January 2026 00:46:06 +0000 (0:00:01.202) 0:00:08.764 *****
2026-01-07 00:46:13.524437 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524446 | orchestrator |
2026-01-07 00:46:13.524453 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:13.524460 | orchestrator | Wednesday 07 January 2026 00:46:06 +0000 (0:00:00.216) 0:00:08.981 *****
2026-01-07 00:46:13.524467 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524475 | orchestrator |
2026-01-07 00:46:13.524481 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:13.524489 | orchestrator | Wednesday 07 January 2026 00:46:06 +0000 (0:00:00.193) 0:00:09.174 *****
2026-01-07 00:46:13.524496 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524503 | orchestrator |
2026-01-07 00:46:13.524510 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:13.524517 | orchestrator | Wednesday 07 January 2026 00:46:07 +0000 (0:00:00.246) 0:00:09.420 *****
2026-01-07 00:46:13.524524 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524530 | orchestrator |
2026-01-07 00:46:13.524538 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-07 00:46:13.524545 | orchestrator | Wednesday 07 January 2026 00:46:07 +0000 (0:00:00.193) 0:00:09.614 *****
2026-01-07 00:46:13.524552 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524558 | orchestrator |
2026-01-07 00:46:13.524564 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-07 00:46:13.524571 | orchestrator | Wednesday 07 January 2026 00:46:07 +0000 (0:00:00.177) 0:00:09.792 *****
2026-01-07 00:46:13.524577 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'}})
2026-01-07 00:46:13.524584 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35426297-011a-51b2-a2d6-4f3d2a544c0e'}})
2026-01-07 00:46:13.524592 | orchestrator |
2026-01-07 00:46:13.524599 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-07 00:46:13.524628 | orchestrator | Wednesday 07 January 2026 00:46:07 +0000 (0:00:00.206) 0:00:09.998 *****
2026-01-07 00:46:13.524637 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.524644 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.524650 | orchestrator |
2026-01-07 00:46:13.524657 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-07 00:46:13.524664 | orchestrator | Wednesday 07 January 2026 00:46:09 +0000 (0:00:01.990) 0:00:11.988 *****
2026-01-07 00:46:13.524670 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.524679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.524686 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524693 | orchestrator |
2026-01-07 00:46:13.524700 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-07 00:46:13.524707 | orchestrator | Wednesday 07 January 2026 00:46:09 +0000 (0:00:00.160) 0:00:12.149 *****
2026-01-07 00:46:13.524714 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.524721 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.524728 | orchestrator |
2026-01-07 00:46:13.524735 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-07 00:46:13.524742 | orchestrator | Wednesday 07 January 2026 00:46:11 +0000 (0:00:01.640) 0:00:13.789 *****
2026-01-07 00:46:13.524750 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.524756 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.524763 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524770 | orchestrator |
2026-01-07 00:46:13.524777 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-07 00:46:13.524784 | orchestrator | Wednesday 07 January 2026 00:46:11 +0000 (0:00:00.146) 0:00:13.951 *****
2026-01-07 00:46:13.524806 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524814 | orchestrator |
2026-01-07 00:46:13.524820 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-07 00:46:13.524827 | orchestrator | Wednesday 07 January 2026 00:46:11 +0000 (0:00:00.146) 0:00:14.097 *****
2026-01-07 00:46:13.524834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.524840 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.524847 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524854 | orchestrator |
2026-01-07 00:46:13.524861 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-07 00:46:13.524868 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.377) 0:00:14.475 *****
2026-01-07 00:46:13.524874 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524880 | orchestrator |
2026-01-07 00:46:13.524887 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-07 00:46:13.524894 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.143) 0:00:14.618 *****
2026-01-07 00:46:13.524910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.524918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.524925 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524931 | orchestrator |
2026-01-07 00:46:13.524937 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-07 00:46:13.524944 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.156) 0:00:14.775 *****
2026-01-07 00:46:13.524951 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.524958 | orchestrator |
2026-01-07 00:46:13.524965 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-07 00:46:13.524972 | orchestrator |
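Editorial note: the "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs/LVs" tasks above derive one VG (`ceph-<uuid>`) and one LV (`osd-block-<uuid>`) per OSD from each device's `osd_lvm_uuid`. A minimal sketch of that naming step, using the values visible in the log (`build_lvm_volumes` is a hypothetical helper, not part of the OSISM playbook):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive LV/VG names from each OSD's osd_lvm_uuid.

    Naming scheme as it appears in the log:
    data    -> osd-block-<uuid>   (logical volume)
    data_vg -> ceph-<uuid>        (volume group)
    """
    return [
        {
            "data": f"osd-block-{dev['osd_lvm_uuid']}",
            "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
        }
        for dev in ceph_osd_devices.values()
    ]


# Values taken from the task output above (testbed-node-3, sdb and sdc).
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "ef56a04c-76f1-5b5f-91f5-fd927a7d00fc"},
    "sdc": {"osd_lvm_uuid": "35426297-011a-51b2-a2d6-4f3d2a544c0e"},
}

print(build_lvm_volumes(ceph_osd_devices))
```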
Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.136) 0:00:14.911 *****
2026-01-07 00:46:13.524978 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.524986 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.524993 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.525000 | orchestrator |
2026-01-07 00:46:13.525007 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-07 00:46:13.525013 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.143) 0:00:15.057 *****
2026-01-07 00:46:13.525020 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:46:13.525027 | orchestrator |
2026-01-07 00:46:13.525034 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-07 00:46:13.525117 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.143) 0:00:15.201 *****
2026-01-07 00:46:13.525130 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.525138 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.525146 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.525153 | orchestrator |
2026-01-07 00:46:13.525160 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-07 00:46:13.525166 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.164) 0:00:15.366 *****
2026-01-07 00:46:13.525173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.525180 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.525187 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.525194 | orchestrator |
2026-01-07 00:46:13.525202 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-07 00:46:13.525209 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.173) 0:00:15.540 *****
2026-01-07 00:46:13.525217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:13.525223 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:13.525231 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.525239 | orchestrator |
2026-01-07 00:46:13.525247 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-07 00:46:13.525261 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.152) 0:00:15.692 *****
2026-01-07 00:46:13.525268 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:13.525274 | orchestrator |
2026-01-07 00:46:13.525280 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-07 00:46:13.525295 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.153) 0:00:15.846 *****
2026-01-07 00:46:20.195493 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195542 | orchestrator |
2026-01-07 00:46:20.195547 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-07 00:46:20.195551 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.148) 0:00:15.994 *****
2026-01-07 00:46:20.195554 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195558 | orchestrator |
2026-01-07 00:46:20.195561 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-07 00:46:20.195564 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.135) 0:00:16.129 *****
2026-01-07 00:46:20.195568 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 00:46:20.195571 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-07 00:46:20.195575 | orchestrator | }
2026-01-07 00:46:20.195578 | orchestrator |
2026-01-07 00:46:20.195581 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-07 00:46:20.195584 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.355) 0:00:16.485 *****
2026-01-07 00:46:20.195587 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 00:46:20.195590 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-07 00:46:20.195594 | orchestrator | }
2026-01-07 00:46:20.195597 | orchestrator |
2026-01-07 00:46:20.195600 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-07 00:46:20.195604 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.159) 0:00:16.644 *****
2026-01-07 00:46:20.195609 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 00:46:20.195615 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-07 00:46:20.195618 | orchestrator | }
2026-01-07 00:46:20.195621 | orchestrator |
2026-01-07 00:46:20.195624 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-07 00:46:20.195627 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.144) 0:00:16.789 *****
2026-01-07 00:46:20.195630 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:46:20.195634 | orchestrator |
2026-01-07 00:46:20.195637 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-07 00:46:20.195640 | orchestrator | Wednesday 07 January 2026 00:46:15 +0000 (0:00:00.670) 0:00:17.459 *****
2026-01-07 00:46:20.195643 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:46:20.195646 | orchestrator |
2026-01-07 00:46:20.195649 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-07 00:46:20.195652 | orchestrator | Wednesday 07 January 2026 00:46:15 +0000 (0:00:00.489) 0:00:17.949 *****
2026-01-07 00:46:20.195655 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:46:20.195658 | orchestrator |
2026-01-07 00:46:20.195662 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-07 00:46:20.195665 | orchestrator | Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.160) 0:00:18.470 *****
2026-01-07 00:46:20.195668 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:46:20.195671 | orchestrator |
2026-01-07 00:46:20.195674 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-07 00:46:20.195677 | orchestrator | Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.160) 0:00:18.631 *****
2026-01-07 00:46:20.195680 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195683 | orchestrator |
2026-01-07 00:46:20.195686 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-07 00:46:20.195690 | orchestrator | Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.125) 0:00:18.756 *****
2026-01-07 00:46:20.195696 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195700 | orchestrator |
2026-01-07 00:46:20.195703 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-07 00:46:20.195718 | orchestrator |
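Editorial note: the "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" and "Combine JSON" tasks above collect LVM report output before the size checks. A minimal sketch of parsing such a report, assuming the `{"report": [{"vg": [...]}]}` shape that `vgs --reportformat json --units b` produces (the sample data and `vg_sizes` helper are illustrative, not taken from the playbook):

```python
import json

# Hypothetical `vgs --reportformat json --units b` output for a single DB VG.
vgs_json = """
{"report": [{"vg": [
    {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "64424509440B"}
]}]}
"""


def vg_sizes(report_text):
    """Map VG name -> (total_bytes, free_bytes), stripping the trailing 'B' unit."""
    vgs = json.loads(report_text)["report"][0]["vg"]
    return {
        vg["vg_name"]: (int(vg["vg_size"].rstrip("B")), int(vg["vg_free"].rstrip("B")))
        for vg in vgs
    }


print(vg_sizes(vgs_json))
```

On this run the combined report is empty (`"vg": []` below), because no DB/WAL devices are configured in the testbed.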
Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.115) 0:00:18.872 *****
2026-01-07 00:46:20.195721 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 00:46:20.195725 | orchestrator |     "vgs_report": {
2026-01-07 00:46:20.195728 | orchestrator |         "vg": []
2026-01-07 00:46:20.195731 | orchestrator |     }
2026-01-07 00:46:20.195735 | orchestrator | }
2026-01-07 00:46:20.195738 | orchestrator |
2026-01-07 00:46:20.195741 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-07 00:46:20.195744 | orchestrator | Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.157) 0:00:19.029 *****
2026-01-07 00:46:20.195747 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195750 | orchestrator |
2026-01-07 00:46:20.195753 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-07 00:46:20.195757 | orchestrator | Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.166) 0:00:19.196 *****
2026-01-07 00:46:20.195760 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195763 | orchestrator |
2026-01-07 00:46:20.195766 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-07 00:46:20.195769 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.151) 0:00:19.348 *****
2026-01-07 00:46:20.195772 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195775 | orchestrator |
2026-01-07 00:46:20.195778 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-07 00:46:20.195782 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.351) 0:00:19.699 *****
2026-01-07 00:46:20.195785 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195788 | orchestrator |
2026-01-07 00:46:20.195791 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-07 00:46:20.195794 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.168) 0:00:19.867 *****
2026-01-07 00:46:20.195797 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195800 | orchestrator |
2026-01-07 00:46:20.195804 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-07 00:46:20.195807 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.158) 0:00:20.026 *****
2026-01-07 00:46:20.195810 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195813 | orchestrator |
2026-01-07 00:46:20.195816 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-07 00:46:20.195819 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.145) 0:00:20.172 *****
2026-01-07 00:46:20.195822 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195825 | orchestrator |
2026-01-07 00:46:20.195829 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-07 00:46:20.195832 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.154) 0:00:20.326 *****
2026-01-07 00:46:20.195842 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195845 | orchestrator |
2026-01-07 00:46:20.195848 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-07 00:46:20.195851 | orchestrator | Wednesday 07 January 2026 00:46:18 +0000 (0:00:00.178) 0:00:20.504 *****
2026-01-07 00:46:20.195854 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195857 | orchestrator |
2026-01-07 00:46:20.195861 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-07 00:46:20.195864 | orchestrator | Wednesday 07 January 2026 00:46:18 +0000 (0:00:00.161) 0:00:20.666 *****
2026-01-07 00:46:20.195867 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195870 | orchestrator |
2026-01-07 00:46:20.195873 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-07 00:46:20.195876 | orchestrator | Wednesday 07 January 2026 00:46:18 +0000 (0:00:00.128) 0:00:20.794 *****
2026-01-07 00:46:20.195879 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195882 | orchestrator |
2026-01-07 00:46:20.195885 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-07 00:46:20.195888 | orchestrator | Wednesday 07 January 2026 00:46:18 +0000 (0:00:00.143) 0:00:20.938 *****
2026-01-07 00:46:20.195894 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195897 | orchestrator |
2026-01-07 00:46:20.195900 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-07 00:46:20.195903 | orchestrator | Wednesday 07 January 2026 00:46:18 +0000 (0:00:00.132) 0:00:21.070 *****
2026-01-07 00:46:20.195906 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.195939 | orchestrator |
2026-01-07 00:46:20.195943 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-07 00:46:20.195946 | orchestrator | Wednesday 07 January 2026 00:46:18 +0000 (0:00:00.133) 0:00:21.204 *****
2026-01-07 00:46:20.195949 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.196002 | orchestrator |
2026-01-07 00:46:20.196007 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-07 00:46:20.196048 | orchestrator | Wednesday 07 January 2026 00:46:19 +0000 (0:00:00.148) 0:00:21.352 *****
2026-01-07 00:46:20.196052 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:20.196056 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:20.196059 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.196062 | orchestrator |
2026-01-07 00:46:20.196065 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-07 00:46:20.196068 | orchestrator | Wednesday 07 January 2026 00:46:19 +0000 (0:00:00.370) 0:00:21.723 *****
2026-01-07 00:46:20.196071 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:20.196074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:20.196077 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.196080 | orchestrator |
2026-01-07 00:46:20.196083 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-07 00:46:20.196087 | orchestrator | Wednesday 07 January 2026 00:46:19 +0000 (0:00:00.143) 0:00:21.893 *****
2026-01-07 00:46:20.196090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:20.196093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:20.196096 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.196099 | orchestrator |
2026-01-07 00:46:20.196102 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-07 00:46:20.196105 | orchestrator | Wednesday 07 January 2026 00:46:19 +0000 (0:00:00.155) 0:00:22.037 *****
2026-01-07 00:46:20.196108 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:20.196111 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:20.196114 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.196117 | orchestrator |
2026-01-07 00:46:20.196121 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-07 00:46:20.196124 | orchestrator | Wednesday 07 January 2026 00:46:19 +0000 (0:00:00.151) 0:00:22.192 *****
2026-01-07 00:46:20.196127 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:20.196130 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:20.196136 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:20.196139 | orchestrator |
2026-01-07 00:46:20.196142 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-07 00:46:20.196148 | orchestrator | Wednesday 07 January 2026 00:46:20 +0000 (0:00:00.177) 0:00:22.344 *****
2026-01-07 00:46:20.196154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:25.805507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:25.805584 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:25.805598 | orchestrator |
2026-01-07 00:46:25.805609 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-07 00:46:25.805620 | orchestrator | Wednesday 07 January 2026 00:46:20 +0000 (0:00:00.177) 0:00:22.522 *****
2026-01-07 00:46:25.805630 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:25.805641 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:25.805651 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:25.805657 | orchestrator |
2026-01-07 00:46:25.805663 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-07 00:46:25.805669 | orchestrator | Wednesday 07 January 2026 00:46:20 +0000 (0:00:00.213) 0:00:22.735 *****
2026-01-07 00:46:25.805675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:46:25.805681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:46:25.805687 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:25.805692 | orchestrator |
2026-01-07 00:46:25.805698 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-07 00:46:25.805704 | orchestrator | Wednesday 07 January 2026 00:46:20 +0000 (0:00:00.174) 0:00:22.910 *****
2026-01-07 00:46:25.805710 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:46:25.805716 | orchestrator |
2026-01-07 00:46:25.805722 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-07 00:46:25.805728 | orchestrator | Wednesday 07 January 2026 00:46:21 +0000
(0:00:00.455) 0:00:23.366 ***** 2026-01-07 00:46:25.805733 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:46:25.805739 | orchestrator | 2026-01-07 00:46:25.805745 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-07 00:46:25.805750 | orchestrator | Wednesday 07 January 2026 00:46:21 +0000 (0:00:00.536) 0:00:23.902 ***** 2026-01-07 00:46:25.805756 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:46:25.805762 | orchestrator | 2026-01-07 00:46:25.805767 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-07 00:46:25.805773 | orchestrator | Wednesday 07 January 2026 00:46:21 +0000 (0:00:00.172) 0:00:24.074 ***** 2026-01-07 00:46:25.805779 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'vg_name': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'}) 2026-01-07 00:46:25.805795 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'vg_name': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'}) 2026-01-07 00:46:25.805801 | orchestrator | 2026-01-07 00:46:25.805807 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-07 00:46:25.805813 | orchestrator | Wednesday 07 January 2026 00:46:21 +0000 (0:00:00.186) 0:00:24.261 ***** 2026-01-07 00:46:25.805833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})  2026-01-07 00:46:25.805839 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})  2026-01-07 00:46:25.805844 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:46:25.805850 | orchestrator | 2026-01-07 00:46:25.805856 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-07 00:46:25.805861 | orchestrator | Wednesday 07 January 2026 00:46:22 +0000 (0:00:00.365) 0:00:24.626 ***** 2026-01-07 00:46:25.805867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})  2026-01-07 00:46:25.805873 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})  2026-01-07 00:46:25.805878 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:46:25.805885 | orchestrator | 2026-01-07 00:46:25.805890 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-07 00:46:25.805896 | orchestrator | Wednesday 07 January 2026 00:46:22 +0000 (0:00:00.167) 0:00:24.794 ***** 2026-01-07 00:46:25.805902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})  2026-01-07 00:46:25.805907 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})  2026-01-07 00:46:25.805913 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:46:25.805919 | orchestrator | 2026-01-07 00:46:25.805924 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-07 00:46:25.805930 | orchestrator | Wednesday 07 January 2026 00:46:22 +0000 (0:00:00.187) 0:00:24.982 ***** 2026-01-07 00:46:25.805947 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 00:46:25.805954 | orchestrator |  "lvm_report": { 2026-01-07 00:46:25.805960 | orchestrator |  "lv": [ 2026-01-07 00:46:25.805966 | orchestrator |  { 2026-01-07 00:46:25.805972 | orchestrator |  "lv_name": 
"osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e", 2026-01-07 00:46:25.805978 | orchestrator |  "vg_name": "ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e" 2026-01-07 00:46:25.805984 | orchestrator |  }, 2026-01-07 00:46:25.805990 | orchestrator |  { 2026-01-07 00:46:25.805996 | orchestrator |  "lv_name": "osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc", 2026-01-07 00:46:25.806002 | orchestrator |  "vg_name": "ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc" 2026-01-07 00:46:25.806007 | orchestrator |  } 2026-01-07 00:46:25.806062 | orchestrator |  ], 2026-01-07 00:46:25.806071 | orchestrator |  "pv": [ 2026-01-07 00:46:25.806077 | orchestrator |  { 2026-01-07 00:46:25.806083 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-07 00:46:25.806088 | orchestrator |  "vg_name": "ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc" 2026-01-07 00:46:25.806094 | orchestrator |  }, 2026-01-07 00:46:25.806100 | orchestrator |  { 2026-01-07 00:46:25.806106 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-07 00:46:25.806111 | orchestrator |  "vg_name": "ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e" 2026-01-07 00:46:25.806117 | orchestrator |  } 2026-01-07 00:46:25.806123 | orchestrator |  ] 2026-01-07 00:46:25.806129 | orchestrator |  } 2026-01-07 00:46:25.806135 | orchestrator | } 2026-01-07 00:46:25.806141 | orchestrator | 2026-01-07 00:46:25.806147 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-07 00:46:25.806152 | orchestrator | 2026-01-07 00:46:25.806158 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:46:25.806169 | orchestrator | Wednesday 07 January 2026 00:46:22 +0000 (0:00:00.304) 0:00:25.286 ***** 2026-01-07 00:46:25.806175 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-07 00:46:25.806181 | orchestrator | 2026-01-07 00:46:25.806187 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 
00:46:25.806193 | orchestrator | Wednesday 07 January 2026 00:46:23 +0000 (0:00:00.272) 0:00:25.558 ***** 2026-01-07 00:46:25.806198 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:25.806204 | orchestrator | 2026-01-07 00:46:25.806210 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:25.806216 | orchestrator | Wednesday 07 January 2026 00:46:23 +0000 (0:00:00.254) 0:00:25.813 ***** 2026-01-07 00:46:25.806221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-07 00:46:25.806227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:46:25.806233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:46:25.806238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:46:25.806244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:46:25.806250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:46:25.806259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:46:25.806265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:46:25.806270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-07 00:46:25.806276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:46:25.806282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:46:25.806287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:46:25.806293 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:46:25.806298 | orchestrator | 2026-01-07 00:46:25.806304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:25.806310 | orchestrator | Wednesday 07 January 2026 00:46:23 +0000 (0:00:00.461) 0:00:26.275 ***** 2026-01-07 00:46:25.806316 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:25.806321 | orchestrator | 2026-01-07 00:46:25.806327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:25.806333 | orchestrator | Wednesday 07 January 2026 00:46:24 +0000 (0:00:00.216) 0:00:26.492 ***** 2026-01-07 00:46:25.806338 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:25.806344 | orchestrator | 2026-01-07 00:46:25.806354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:25.806364 | orchestrator | Wednesday 07 January 2026 00:46:24 +0000 (0:00:00.232) 0:00:26.724 ***** 2026-01-07 00:46:25.806374 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:25.806385 | orchestrator | 2026-01-07 00:46:25.806395 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:25.806406 | orchestrator | Wednesday 07 January 2026 00:46:25 +0000 (0:00:00.710) 0:00:27.435 ***** 2026-01-07 00:46:25.806416 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:25.806427 | orchestrator | 2026-01-07 00:46:25.806438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:25.806449 | orchestrator | Wednesday 07 January 2026 00:46:25 +0000 (0:00:00.262) 0:00:27.698 ***** 2026-01-07 00:46:25.806456 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:25.806462 | orchestrator | 2026-01-07 00:46:25.806467 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-07 00:46:25.806478 | orchestrator | Wednesday 07 January 2026 00:46:25 +0000 (0:00:00.224) 0:00:27.922 ***** 2026-01-07 00:46:25.806484 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:25.806489 | orchestrator | 2026-01-07 00:46:25.806500 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:37.256538 | orchestrator | Wednesday 07 January 2026 00:46:25 +0000 (0:00:00.209) 0:00:28.131 ***** 2026-01-07 00:46:37.256657 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.256669 | orchestrator | 2026-01-07 00:46:37.256677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:37.256685 | orchestrator | Wednesday 07 January 2026 00:46:26 +0000 (0:00:00.211) 0:00:28.343 ***** 2026-01-07 00:46:37.256692 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.256699 | orchestrator | 2026-01-07 00:46:37.256706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:37.256714 | orchestrator | Wednesday 07 January 2026 00:46:26 +0000 (0:00:00.193) 0:00:28.536 ***** 2026-01-07 00:46:37.256721 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad) 2026-01-07 00:46:37.256730 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad) 2026-01-07 00:46:37.256737 | orchestrator | 2026-01-07 00:46:37.256743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:37.256749 | orchestrator | Wednesday 07 January 2026 00:46:26 +0000 (0:00:00.424) 0:00:28.961 ***** 2026-01-07 00:46:37.256755 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d) 2026-01-07 00:46:37.256762 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d) 2026-01-07 00:46:37.256768 | orchestrator | 2026-01-07 00:46:37.256774 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:37.256780 | orchestrator | Wednesday 07 January 2026 00:46:27 +0000 (0:00:00.482) 0:00:29.443 ***** 2026-01-07 00:46:37.256785 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843) 2026-01-07 00:46:37.256791 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843) 2026-01-07 00:46:37.256797 | orchestrator | 2026-01-07 00:46:37.256803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:37.256808 | orchestrator | Wednesday 07 January 2026 00:46:27 +0000 (0:00:00.473) 0:00:29.917 ***** 2026-01-07 00:46:37.256814 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e) 2026-01-07 00:46:37.256820 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e) 2026-01-07 00:46:37.256826 | orchestrator | 2026-01-07 00:46:37.256832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:37.256838 | orchestrator | Wednesday 07 January 2026 00:46:28 +0000 (0:00:00.668) 0:00:30.585 ***** 2026-01-07 00:46:37.256843 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:46:37.256849 | orchestrator | 2026-01-07 00:46:37.256856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.256862 | orchestrator | Wednesday 07 January 2026 00:46:28 +0000 (0:00:00.573) 0:00:31.159 ***** 2026-01-07 00:46:37.256869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-07 00:46:37.256877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:46:37.256883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:46:37.256889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:46:37.256896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:46:37.256957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:46:37.256966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:46:37.256972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:46:37.256979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-07 00:46:37.256985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:46:37.256992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:46:37.256998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:46:37.257005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:46:37.257035 | orchestrator | 2026-01-07 00:46:37.257042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257049 | orchestrator | Wednesday 07 January 2026 00:46:29 +0000 (0:00:00.871) 0:00:32.031 ***** 2026-01-07 00:46:37.257055 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257061 | orchestrator | 2026-01-07 
00:46:37.257069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257075 | orchestrator | Wednesday 07 January 2026 00:46:29 +0000 (0:00:00.219) 0:00:32.251 ***** 2026-01-07 00:46:37.257081 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257088 | orchestrator | 2026-01-07 00:46:37.257094 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257101 | orchestrator | Wednesday 07 January 2026 00:46:30 +0000 (0:00:00.229) 0:00:32.480 ***** 2026-01-07 00:46:37.257108 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257114 | orchestrator | 2026-01-07 00:46:37.257144 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257151 | orchestrator | Wednesday 07 January 2026 00:46:30 +0000 (0:00:00.200) 0:00:32.680 ***** 2026-01-07 00:46:37.257158 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257164 | orchestrator | 2026-01-07 00:46:37.257171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257178 | orchestrator | Wednesday 07 January 2026 00:46:30 +0000 (0:00:00.198) 0:00:32.878 ***** 2026-01-07 00:46:37.257184 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257191 | orchestrator | 2026-01-07 00:46:37.257197 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257204 | orchestrator | Wednesday 07 January 2026 00:46:30 +0000 (0:00:00.214) 0:00:33.092 ***** 2026-01-07 00:46:37.257211 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257217 | orchestrator | 2026-01-07 00:46:37.257223 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257230 | orchestrator | Wednesday 07 January 2026 00:46:30 +0000 (0:00:00.228) 
0:00:33.321 ***** 2026-01-07 00:46:37.257237 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257243 | orchestrator | 2026-01-07 00:46:37.257249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257255 | orchestrator | Wednesday 07 January 2026 00:46:31 +0000 (0:00:00.204) 0:00:33.525 ***** 2026-01-07 00:46:37.257261 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257268 | orchestrator | 2026-01-07 00:46:37.257274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257280 | orchestrator | Wednesday 07 January 2026 00:46:31 +0000 (0:00:00.204) 0:00:33.729 ***** 2026-01-07 00:46:37.257286 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-07 00:46:37.257293 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-07 00:46:37.257301 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-07 00:46:37.257307 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-07 00:46:37.257323 | orchestrator | 2026-01-07 00:46:37.257330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257337 | orchestrator | Wednesday 07 January 2026 00:46:32 +0000 (0:00:00.889) 0:00:34.619 ***** 2026-01-07 00:46:37.257343 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257349 | orchestrator | 2026-01-07 00:46:37.257355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257361 | orchestrator | Wednesday 07 January 2026 00:46:32 +0000 (0:00:00.196) 0:00:34.815 ***** 2026-01-07 00:46:37.257368 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257375 | orchestrator | 2026-01-07 00:46:37.257382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257388 | orchestrator | Wednesday 07 
January 2026 00:46:33 +0000 (0:00:00.667) 0:00:35.482 ***** 2026-01-07 00:46:37.257395 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257402 | orchestrator | 2026-01-07 00:46:37.257409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:37.257416 | orchestrator | Wednesday 07 January 2026 00:46:33 +0000 (0:00:00.183) 0:00:35.666 ***** 2026-01-07 00:46:37.257423 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257430 | orchestrator | 2026-01-07 00:46:37.257437 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-07 00:46:37.257450 | orchestrator | Wednesday 07 January 2026 00:46:33 +0000 (0:00:00.204) 0:00:35.870 ***** 2026-01-07 00:46:37.257456 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257463 | orchestrator | 2026-01-07 00:46:37.257469 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-07 00:46:37.257475 | orchestrator | Wednesday 07 January 2026 00:46:33 +0000 (0:00:00.138) 0:00:36.008 ***** 2026-01-07 00:46:37.257482 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e6008a2-36a5-590e-8013-ca4c2218d3f7'}}) 2026-01-07 00:46:37.257488 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16bf28f1-ae52-5ff4-8907-41e0bcdec1af'}}) 2026-01-07 00:46:37.257494 | orchestrator | 2026-01-07 00:46:37.257499 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-07 00:46:37.257505 | orchestrator | Wednesday 07 January 2026 00:46:33 +0000 (0:00:00.180) 0:00:36.189 ***** 2026-01-07 00:46:37.257513 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'}) 2026-01-07 00:46:37.257521 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'}) 2026-01-07 00:46:37.257526 | orchestrator | 2026-01-07 00:46:37.257531 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-07 00:46:37.257537 | orchestrator | Wednesday 07 January 2026 00:46:35 +0000 (0:00:01.888) 0:00:38.077 ***** 2026-01-07 00:46:37.257543 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:37.257551 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:37.257556 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:37.257562 | orchestrator | 2026-01-07 00:46:37.257568 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-07 00:46:37.257573 | orchestrator | Wednesday 07 January 2026 00:46:35 +0000 (0:00:00.162) 0:00:38.240 ***** 2026-01-07 00:46:37.257579 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'}) 2026-01-07 00:46:37.257593 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'}) 2026-01-07 00:46:42.958546 | orchestrator | 2026-01-07 00:46:42.958671 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-07 00:46:42.958685 | orchestrator | Wednesday 07 January 2026 00:46:37 +0000 (0:00:01.341) 0:00:39.582 ***** 2026-01-07 00:46:42.958693 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 
'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:42.958701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:42.958708 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.958717 | orchestrator | 2026-01-07 00:46:42.958723 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-07 00:46:42.958729 | orchestrator | Wednesday 07 January 2026 00:46:37 +0000 (0:00:00.166) 0:00:39.749 ***** 2026-01-07 00:46:42.958736 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.958742 | orchestrator | 2026-01-07 00:46:42.958748 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-07 00:46:42.958755 | orchestrator | Wednesday 07 January 2026 00:46:37 +0000 (0:00:00.154) 0:00:39.903 ***** 2026-01-07 00:46:42.958761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:42.958767 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:42.958773 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.958779 | orchestrator | 2026-01-07 00:46:42.958786 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-07 00:46:42.958791 | orchestrator | Wednesday 07 January 2026 00:46:37 +0000 (0:00:00.163) 0:00:40.066 ***** 2026-01-07 00:46:42.958798 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.958804 | orchestrator | 2026-01-07 00:46:42.958811 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-07 00:46:42.958817 | orchestrator | 
Wednesday 07 January 2026 00:46:37 +0000 (0:00:00.130) 0:00:40.197 ***** 2026-01-07 00:46:42.958824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:42.958830 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:42.958836 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.958842 | orchestrator | 2026-01-07 00:46:42.958847 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-07 00:46:42.958875 | orchestrator | Wednesday 07 January 2026 00:46:38 +0000 (0:00:00.379) 0:00:40.576 ***** 2026-01-07 00:46:42.958882 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.958888 | orchestrator | 2026-01-07 00:46:42.958894 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-07 00:46:42.958901 | orchestrator | Wednesday 07 January 2026 00:46:38 +0000 (0:00:00.134) 0:00:40.711 ***** 2026-01-07 00:46:42.958907 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:42.958914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:42.958920 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.958925 | orchestrator | 2026-01-07 00:46:42.958944 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-07 00:46:42.958957 | orchestrator | Wednesday 07 January 2026 00:46:38 +0000 (0:00:00.167) 0:00:40.879 ***** 2026-01-07 00:46:42.958963 | orchestrator | ok: [testbed-node-4] 
2026-01-07 00:46:42.958997 | orchestrator | 2026-01-07 00:46:42.959005 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-07 00:46:42.959064 | orchestrator | Wednesday 07 January 2026 00:46:38 +0000 (0:00:00.149) 0:00:41.028 ***** 2026-01-07 00:46:42.959070 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:42.959076 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:42.959082 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959088 | orchestrator | 2026-01-07 00:46:42.959094 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-07 00:46:42.959100 | orchestrator | Wednesday 07 January 2026 00:46:38 +0000 (0:00:00.149) 0:00:41.177 ***** 2026-01-07 00:46:42.959107 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:42.959114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:42.959120 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959126 | orchestrator | 2026-01-07 00:46:42.959132 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-07 00:46:42.959162 | orchestrator | Wednesday 07 January 2026 00:46:38 +0000 (0:00:00.152) 0:00:41.330 ***** 2026-01-07 00:46:42.959169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 
00:46:42.959176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:42.959193 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959199 | orchestrator | 2026-01-07 00:46:42.959206 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-07 00:46:42.959212 | orchestrator | Wednesday 07 January 2026 00:46:39 +0000 (0:00:00.158) 0:00:41.488 ***** 2026-01-07 00:46:42.959225 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959231 | orchestrator | 2026-01-07 00:46:42.959238 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-07 00:46:42.959243 | orchestrator | Wednesday 07 January 2026 00:46:39 +0000 (0:00:00.145) 0:00:41.634 ***** 2026-01-07 00:46:42.959248 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959254 | orchestrator | 2026-01-07 00:46:42.959260 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-07 00:46:42.959265 | orchestrator | Wednesday 07 January 2026 00:46:39 +0000 (0:00:00.116) 0:00:41.751 ***** 2026-01-07 00:46:42.959272 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959277 | orchestrator | 2026-01-07 00:46:42.959283 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-07 00:46:42.959289 | orchestrator | Wednesday 07 January 2026 00:46:39 +0000 (0:00:00.140) 0:00:41.892 ***** 2026-01-07 00:46:42.959295 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 00:46:42.959301 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-07 00:46:42.959307 | orchestrator | } 2026-01-07 00:46:42.959314 | orchestrator | 2026-01-07 00:46:42.959319 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-07 
00:46:42.959325 | orchestrator | Wednesday 07 January 2026 00:46:39 +0000 (0:00:00.143) 0:00:42.035 ***** 2026-01-07 00:46:42.959331 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 00:46:42.959337 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-07 00:46:42.959343 | orchestrator | } 2026-01-07 00:46:42.959349 | orchestrator | 2026-01-07 00:46:42.959355 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-07 00:46:42.959361 | orchestrator | Wednesday 07 January 2026 00:46:39 +0000 (0:00:00.141) 0:00:42.176 ***** 2026-01-07 00:46:42.959379 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 00:46:42.959387 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-07 00:46:42.959393 | orchestrator | } 2026-01-07 00:46:42.959400 | orchestrator | 2026-01-07 00:46:42.959406 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-07 00:46:42.959412 | orchestrator | Wednesday 07 January 2026 00:46:40 +0000 (0:00:00.361) 0:00:42.538 ***** 2026-01-07 00:46:42.959418 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:42.959425 | orchestrator | 2026-01-07 00:46:42.959431 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-07 00:46:42.959438 | orchestrator | Wednesday 07 January 2026 00:46:40 +0000 (0:00:00.559) 0:00:43.098 ***** 2026-01-07 00:46:42.959444 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:42.959450 | orchestrator | 2026-01-07 00:46:42.959457 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-07 00:46:42.959463 | orchestrator | Wednesday 07 January 2026 00:46:41 +0000 (0:00:00.517) 0:00:43.615 ***** 2026-01-07 00:46:42.959470 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:42.959475 | orchestrator | 2026-01-07 00:46:42.959481 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-01-07 00:46:42.959488 | orchestrator | Wednesday 07 January 2026 00:46:41 +0000 (0:00:00.543) 0:00:44.159 ***** 2026-01-07 00:46:42.959494 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:42.959500 | orchestrator | 2026-01-07 00:46:42.959506 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-07 00:46:42.959512 | orchestrator | Wednesday 07 January 2026 00:46:41 +0000 (0:00:00.147) 0:00:44.307 ***** 2026-01-07 00:46:42.959518 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959524 | orchestrator | 2026-01-07 00:46:42.959540 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-07 00:46:42.959547 | orchestrator | Wednesday 07 January 2026 00:46:42 +0000 (0:00:00.116) 0:00:44.423 ***** 2026-01-07 00:46:42.959553 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959556 | orchestrator | 2026-01-07 00:46:42.959560 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-07 00:46:42.959564 | orchestrator | Wednesday 07 January 2026 00:46:42 +0000 (0:00:00.107) 0:00:44.530 ***** 2026-01-07 00:46:42.959568 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 00:46:42.959572 | orchestrator |  "vgs_report": { 2026-01-07 00:46:42.959576 | orchestrator |  "vg": [] 2026-01-07 00:46:42.959580 | orchestrator |  } 2026-01-07 00:46:42.959583 | orchestrator | } 2026-01-07 00:46:42.959587 | orchestrator | 2026-01-07 00:46:42.959591 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-07 00:46:42.959595 | orchestrator | Wednesday 07 January 2026 00:46:42 +0000 (0:00:00.155) 0:00:44.685 ***** 2026-01-07 00:46:42.959598 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959602 | orchestrator | 2026-01-07 00:46:42.959606 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-07 00:46:42.959610 | orchestrator | Wednesday 07 January 2026 00:46:42 +0000 (0:00:00.153) 0:00:44.839 ***** 2026-01-07 00:46:42.959613 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959617 | orchestrator | 2026-01-07 00:46:42.959621 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-07 00:46:42.959625 | orchestrator | Wednesday 07 January 2026 00:46:42 +0000 (0:00:00.140) 0:00:44.980 ***** 2026-01-07 00:46:42.959629 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959632 | orchestrator | 2026-01-07 00:46:42.959636 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-07 00:46:42.959640 | orchestrator | Wednesday 07 January 2026 00:46:42 +0000 (0:00:00.149) 0:00:45.129 ***** 2026-01-07 00:46:42.959644 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.959648 | orchestrator | 2026-01-07 00:46:42.959658 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-07 00:46:47.848880 | orchestrator | Wednesday 07 January 2026 00:46:42 +0000 (0:00:00.154) 0:00:45.284 ***** 2026-01-07 00:46:47.848978 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.848990 | orchestrator | 2026-01-07 00:46:47.848999 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-07 00:46:47.849027 | orchestrator | Wednesday 07 January 2026 00:46:43 +0000 (0:00:00.364) 0:00:45.648 ***** 2026-01-07 00:46:47.849036 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849044 | orchestrator | 2026-01-07 00:46:47.849052 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-07 00:46:47.849060 | orchestrator | Wednesday 07 January 2026 00:46:43 +0000 (0:00:00.145) 0:00:45.794 ***** 2026-01-07 00:46:47.849068 | orchestrator | skipping: [testbed-node-4] 
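The earlier "Gather DB VGs with total and available size in bytes" tasks feed the size checks being skipped here (the log's `vgs_report` is empty on this node). A sketch of how such a report can be read, assuming the playbook shells out to LVM2's standard `vgs --reportformat json --units b` and parses its `{"report": [{"vg": [...]}]}` shape; the VG name and sizes below are illustrative, not from this run:

```python
import json

# Illustrative vgs JSON output (shape per LVM2's --reportformat json;
# with --units b, sizes are byte counts suffixed with 'B').
sample = json.loads("""
{"report": [{"vg": [
    {"vg_name": "ceph-db-0",
     "vg_size": "107374182400B",
     "vg_free": "32212254720B"}
]}]}
""")

def vg_sizes(report):
    """Map VG name -> (total_bytes, free_bytes) from a vgs JSON report."""
    out = {}
    for vg in report["report"][0]["vg"]:
        out[vg["vg_name"]] = (int(vg["vg_size"].rstrip("B")),
                              int(vg["vg_free"].rstrip("B")))
    return out

print(vg_sizes(sample))  # -> {'ceph-db-0': (107374182400, 32212254720)}
```

Comparing the needed LV size against `vg_free` per VG is what the "Fail if size of DB LVs on ceph_db_devices > available" task guards; with an empty `vg` list, as on testbed-node-4, every such comparison is skipped.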
2026-01-07 00:46:47.849077 | orchestrator | 2026-01-07 00:46:47.849086 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-07 00:46:47.849095 | orchestrator | Wednesday 07 January 2026 00:46:43 +0000 (0:00:00.132) 0:00:45.927 ***** 2026-01-07 00:46:47.849104 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849112 | orchestrator | 2026-01-07 00:46:47.849121 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-07 00:46:47.849130 | orchestrator | Wednesday 07 January 2026 00:46:43 +0000 (0:00:00.129) 0:00:46.056 ***** 2026-01-07 00:46:47.849138 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849147 | orchestrator | 2026-01-07 00:46:47.849156 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-07 00:46:47.849165 | orchestrator | Wednesday 07 January 2026 00:46:43 +0000 (0:00:00.135) 0:00:46.192 ***** 2026-01-07 00:46:47.849173 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849182 | orchestrator | 2026-01-07 00:46:47.849191 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-07 00:46:47.849199 | orchestrator | Wednesday 07 January 2026 00:46:43 +0000 (0:00:00.137) 0:00:46.329 ***** 2026-01-07 00:46:47.849208 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849216 | orchestrator | 2026-01-07 00:46:47.849225 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-07 00:46:47.849234 | orchestrator | Wednesday 07 January 2026 00:46:44 +0000 (0:00:00.142) 0:00:46.472 ***** 2026-01-07 00:46:47.849243 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849251 | orchestrator | 2026-01-07 00:46:47.849260 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-07 00:46:47.849268 | orchestrator | 
Wednesday 07 January 2026 00:46:44 +0000 (0:00:00.133) 0:00:46.605 ***** 2026-01-07 00:46:47.849277 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849286 | orchestrator | 2026-01-07 00:46:47.849294 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-07 00:46:47.849303 | orchestrator | Wednesday 07 January 2026 00:46:44 +0000 (0:00:00.142) 0:00:46.748 ***** 2026-01-07 00:46:47.849312 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849320 | orchestrator | 2026-01-07 00:46:47.849330 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-07 00:46:47.849348 | orchestrator | Wednesday 07 January 2026 00:46:44 +0000 (0:00:00.150) 0:00:46.898 ***** 2026-01-07 00:46:47.849358 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.849368 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:47.849377 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849386 | orchestrator | 2026-01-07 00:46:47.849394 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-07 00:46:47.849403 | orchestrator | Wednesday 07 January 2026 00:46:44 +0000 (0:00:00.150) 0:00:47.048 ***** 2026-01-07 00:46:47.849412 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.849428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:47.849437 | orchestrator | skipping: 
[testbed-node-4] 2026-01-07 00:46:47.849446 | orchestrator | 2026-01-07 00:46:47.849454 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-07 00:46:47.849463 | orchestrator | Wednesday 07 January 2026 00:46:44 +0000 (0:00:00.153) 0:00:47.201 ***** 2026-01-07 00:46:47.849472 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.849480 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:47.849489 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849498 | orchestrator | 2026-01-07 00:46:47.849507 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-07 00:46:47.849516 | orchestrator | Wednesday 07 January 2026 00:46:45 +0000 (0:00:00.382) 0:00:47.584 ***** 2026-01-07 00:46:47.849524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.849533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:47.849542 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849551 | orchestrator | 2026-01-07 00:46:47.849574 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-07 00:46:47.849584 | orchestrator | Wednesday 07 January 2026 00:46:45 +0000 (0:00:00.151) 0:00:47.735 ***** 2026-01-07 00:46:47.849592 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 
'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.849601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:47.849611 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849619 | orchestrator | 2026-01-07 00:46:47.849628 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-07 00:46:47.849637 | orchestrator | Wednesday 07 January 2026 00:46:45 +0000 (0:00:00.181) 0:00:47.916 ***** 2026-01-07 00:46:47.849646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.849655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:47.849664 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849673 | orchestrator | 2026-01-07 00:46:47.849681 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-07 00:46:47.849690 | orchestrator | Wednesday 07 January 2026 00:46:45 +0000 (0:00:00.155) 0:00:48.072 ***** 2026-01-07 00:46:47.849699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.849708 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:47.849717 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849725 | orchestrator | 2026-01-07 00:46:47.849734 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-07 
00:46:47.849743 | orchestrator | Wednesday 07 January 2026 00:46:45 +0000 (0:00:00.174) 0:00:48.247 ***** 2026-01-07 00:46:47.849757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.849771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:47.849780 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849789 | orchestrator | 2026-01-07 00:46:47.849798 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-07 00:46:47.849807 | orchestrator | Wednesday 07 January 2026 00:46:46 +0000 (0:00:00.148) 0:00:48.395 ***** 2026-01-07 00:46:47.849815 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:47.849824 | orchestrator | 2026-01-07 00:46:47.849833 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-07 00:46:47.849842 | orchestrator | Wednesday 07 January 2026 00:46:46 +0000 (0:00:00.542) 0:00:48.938 ***** 2026-01-07 00:46:47.849851 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:47.849860 | orchestrator | 2026-01-07 00:46:47.849868 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-07 00:46:47.849877 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.582) 0:00:49.520 ***** 2026-01-07 00:46:47.849886 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:47.849895 | orchestrator | 2026-01-07 00:46:47.849904 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-07 00:46:47.849913 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.146) 0:00:49.667 ***** 2026-01-07 00:46:47.849921 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'vg_name': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'}) 2026-01-07 00:46:47.849931 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'vg_name': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'}) 2026-01-07 00:46:47.849940 | orchestrator | 2026-01-07 00:46:47.849949 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-07 00:46:47.849958 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.181) 0:00:49.849 ***** 2026-01-07 00:46:47.849967 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.849975 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:47.849984 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:47.849993 | orchestrator | 2026-01-07 00:46:47.850047 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-07 00:46:47.850057 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.172) 0:00:50.021 ***** 2026-01-07 00:46:47.850066 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:47.850083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:54.169047 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:54.169193 | orchestrator | 2026-01-07 00:46:54.169213 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-07 00:46:54.169227 | 
orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.154) 0:00:50.176 ***** 2026-01-07 00:46:54.169242 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})  2026-01-07 00:46:54.169258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})  2026-01-07 00:46:54.169270 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:54.169317 | orchestrator | 2026-01-07 00:46:54.169330 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-07 00:46:54.169344 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.164) 0:00:50.341 ***** 2026-01-07 00:46:54.169356 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 00:46:54.169369 | orchestrator |  "lvm_report": { 2026-01-07 00:46:54.169387 | orchestrator |  "lv": [ 2026-01-07 00:46:54.169399 | orchestrator |  { 2026-01-07 00:46:54.169407 | orchestrator |  "lv_name": "osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af", 2026-01-07 00:46:54.169415 | orchestrator |  "vg_name": "ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af" 2026-01-07 00:46:54.169423 | orchestrator |  }, 2026-01-07 00:46:54.169430 | orchestrator |  { 2026-01-07 00:46:54.169438 | orchestrator |  "lv_name": "osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7", 2026-01-07 00:46:54.169445 | orchestrator |  "vg_name": "ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7" 2026-01-07 00:46:54.169452 | orchestrator |  } 2026-01-07 00:46:54.169460 | orchestrator |  ], 2026-01-07 00:46:54.169467 | orchestrator |  "pv": [ 2026-01-07 00:46:54.169474 | orchestrator |  { 2026-01-07 00:46:54.169482 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-07 00:46:54.169489 | orchestrator |  "vg_name": "ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7" 2026-01-07 00:46:54.169496 | orchestrator |  }, 2026-01-07 
00:46:54.169503 | orchestrator |  { 2026-01-07 00:46:54.169512 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-07 00:46:54.169520 | orchestrator |  "vg_name": "ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af" 2026-01-07 00:46:54.169529 | orchestrator |  } 2026-01-07 00:46:54.169537 | orchestrator |  ] 2026-01-07 00:46:54.169546 | orchestrator |  } 2026-01-07 00:46:54.169555 | orchestrator | } 2026-01-07 00:46:54.169563 | orchestrator | 2026-01-07 00:46:54.169571 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-07 00:46:54.169580 | orchestrator | 2026-01-07 00:46:54.169588 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:46:54.169597 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.532) 0:00:50.874 ***** 2026-01-07 00:46:54.169606 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-07 00:46:54.169615 | orchestrator | 2026-01-07 00:46:54.169625 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 00:46:54.169634 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.253) 0:00:51.127 ***** 2026-01-07 00:46:54.169642 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:54.169650 | orchestrator | 2026-01-07 00:46:54.169657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.169665 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:00.248) 0:00:51.375 ***** 2026-01-07 00:46:54.169672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-07 00:46:54.169679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-07 00:46:54.169687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-07 00:46:54.169694 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-07 00:46:54.169701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-07 00:46:54.169708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-07 00:46:54.169715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-07 00:46:54.169723 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-07 00:46:54.169730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-07 00:46:54.169744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-07 00:46:54.169752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-07 00:46:54.169759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-07 00:46:54.169766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-07 00:46:54.169773 | orchestrator | 2026-01-07 00:46:54.169785 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.169793 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:00.408) 0:00:51.784 ***** 2026-01-07 00:46:54.169800 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:54.169807 | orchestrator | 2026-01-07 00:46:54.169815 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.169822 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:00.198) 0:00:51.983 ***** 2026-01-07 00:46:54.169829 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:54.169836 | orchestrator | 2026-01-07 
00:46:54.169843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.169875 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:00.247) 0:00:52.230 ***** 2026-01-07 00:46:54.169887 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:54.169899 | orchestrator | 2026-01-07 00:46:54.169910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.169922 | orchestrator | Wednesday 07 January 2026 00:46:50 +0000 (0:00:00.192) 0:00:52.423 ***** 2026-01-07 00:46:54.169933 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:54.169944 | orchestrator | 2026-01-07 00:46:54.169956 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.170112 | orchestrator | Wednesday 07 January 2026 00:46:50 +0000 (0:00:00.224) 0:00:52.648 ***** 2026-01-07 00:46:54.170128 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:54.170135 | orchestrator | 2026-01-07 00:46:54.170143 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.170150 | orchestrator | Wednesday 07 January 2026 00:46:51 +0000 (0:00:00.757) 0:00:53.406 ***** 2026-01-07 00:46:54.170157 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:54.170164 | orchestrator | 2026-01-07 00:46:54.170172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.170179 | orchestrator | Wednesday 07 January 2026 00:46:51 +0000 (0:00:00.176) 0:00:53.583 ***** 2026-01-07 00:46:54.170191 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:54.170204 | orchestrator | 2026-01-07 00:46:54.170216 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.170234 | orchestrator | Wednesday 07 January 2026 00:46:51 +0000 (0:00:00.204) 
0:00:53.787 ***** 2026-01-07 00:46:54.170248 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:54.170259 | orchestrator | 2026-01-07 00:46:54.170271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.170283 | orchestrator | Wednesday 07 January 2026 00:46:51 +0000 (0:00:00.199) 0:00:53.987 ***** 2026-01-07 00:46:54.170295 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f) 2026-01-07 00:46:54.170309 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f) 2026-01-07 00:46:54.170319 | orchestrator | 2026-01-07 00:46:54.170331 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.170343 | orchestrator | Wednesday 07 January 2026 00:46:52 +0000 (0:00:00.435) 0:00:54.422 ***** 2026-01-07 00:46:54.170355 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7) 2026-01-07 00:46:54.170367 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7) 2026-01-07 00:46:54.170379 | orchestrator | 2026-01-07 00:46:54.170403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.170422 | orchestrator | Wednesday 07 January 2026 00:46:52 +0000 (0:00:00.425) 0:00:54.848 ***** 2026-01-07 00:46:54.170435 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616) 2026-01-07 00:46:54.170448 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616) 2026-01-07 00:46:54.170461 | orchestrator | 2026-01-07 00:46:54.170474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:46:54.170487 | orchestrator | Wednesday 07 
January 2026 00:46:52 +0000 (0:00:00.444) 0:00:55.292 *****
2026-01-07 00:46:54.170500 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5)
2026-01-07 00:46:54.170511 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5)
2026-01-07 00:46:54.170519 | orchestrator |
2026-01-07 00:46:54.170526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:46:54.170533 | orchestrator | Wednesday 07 January 2026 00:46:53 +0000 (0:00:00.447) 0:00:55.739 *****
2026-01-07 00:46:54.170540 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-07 00:46:54.170548 | orchestrator |
2026-01-07 00:46:54.170555 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:54.170562 | orchestrator | Wednesday 07 January 2026 00:46:53 +0000 (0:00:00.331) 0:00:56.071 *****
2026-01-07 00:46:54.170569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-07 00:46:54.170576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-07 00:46:54.170583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-07 00:46:54.170590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-07 00:46:54.170597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-07 00:46:54.170604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-07 00:46:54.170612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-07 00:46:54.170619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-07 00:46:54.170626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-07 00:46:54.170633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-07 00:46:54.170640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-07 00:46:54.170659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-07 00:47:03.608446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-07 00:47:03.608563 | orchestrator |
2026-01-07 00:47:03.608575 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608583 | orchestrator | Wednesday 07 January 2026 00:46:54 +0000 (0:00:00.414) 0:00:56.486 *****
2026-01-07 00:47:03.608591 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608600 | orchestrator |
2026-01-07 00:47:03.608608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608615 | orchestrator | Wednesday 07 January 2026 00:46:54 +0000 (0:00:00.203) 0:00:56.689 *****
2026-01-07 00:47:03.608622 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608630 | orchestrator |
2026-01-07 00:47:03.608637 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608644 | orchestrator | Wednesday 07 January 2026 00:46:55 +0000 (0:00:00.669) 0:00:57.358 *****
2026-01-07 00:47:03.608681 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608689 | orchestrator |
2026-01-07 00:47:03.608696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608703 | orchestrator | Wednesday 07 January 2026 00:46:55 +0000 (0:00:00.230) 0:00:57.589 *****
2026-01-07 00:47:03.608711 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608718 | orchestrator |
2026-01-07 00:47:03.608725 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608732 | orchestrator | Wednesday 07 January 2026 00:46:55 +0000 (0:00:00.236) 0:00:57.826 *****
2026-01-07 00:47:03.608740 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608747 | orchestrator |
2026-01-07 00:47:03.608754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608761 | orchestrator | Wednesday 07 January 2026 00:46:55 +0000 (0:00:00.214) 0:00:58.040 *****
2026-01-07 00:47:03.608768 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608775 | orchestrator |
2026-01-07 00:47:03.608783 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608790 | orchestrator | Wednesday 07 January 2026 00:46:55 +0000 (0:00:00.205) 0:00:58.246 *****
2026-01-07 00:47:03.608797 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608804 | orchestrator |
2026-01-07 00:47:03.608812 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608819 | orchestrator | Wednesday 07 January 2026 00:46:56 +0000 (0:00:00.198) 0:00:58.444 *****
2026-01-07 00:47:03.608826 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608833 | orchestrator |
2026-01-07 00:47:03.608841 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608848 | orchestrator | Wednesday 07 January 2026 00:46:56 +0000 (0:00:00.215) 0:00:58.659 *****
2026-01-07 00:47:03.608870 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-07 00:47:03.608878 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-07 00:47:03.608886 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-07 00:47:03.608893 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-07 00:47:03.608901 | orchestrator |
2026-01-07 00:47:03.608908 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608915 | orchestrator | Wednesday 07 January 2026 00:46:56 +0000 (0:00:00.670) 0:00:59.330 *****
2026-01-07 00:47:03.608922 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608930 | orchestrator |
2026-01-07 00:47:03.608938 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608947 | orchestrator | Wednesday 07 January 2026 00:46:57 +0000 (0:00:00.231) 0:00:59.561 *****
2026-01-07 00:47:03.608956 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.608965 | orchestrator |
2026-01-07 00:47:03.608973 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.608981 | orchestrator | Wednesday 07 January 2026 00:46:57 +0000 (0:00:00.260) 0:00:59.822 *****
2026-01-07 00:47:03.609038 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609048 | orchestrator |
2026-01-07 00:47:03.609057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:47:03.609066 | orchestrator | Wednesday 07 January 2026 00:46:57 +0000 (0:00:00.234) 0:01:00.056 *****
2026-01-07 00:47:03.609073 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609080 | orchestrator |
2026-01-07 00:47:03.609087 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-07 00:47:03.609094 | orchestrator | Wednesday 07 January 2026 00:46:57 +0000 (0:00:00.205) 0:01:00.262 *****
2026-01-07 00:47:03.609102 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609109 | orchestrator |
2026-01-07 00:47:03.609116 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-07 00:47:03.609123 | orchestrator | Wednesday 07 January 2026 00:46:58 +0000 (0:00:00.364) 0:01:00.627 *****
2026-01-07 00:47:03.609130 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bbd296ce-f103-5a39-9243-23354e346d82'}})
2026-01-07 00:47:03.609146 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5711b466-e770-5253-91be-c96275afda22'}})
2026-01-07 00:47:03.609154 | orchestrator |
2026-01-07 00:47:03.609161 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-07 00:47:03.609168 | orchestrator | Wednesday 07 January 2026 00:46:58 +0000 (0:00:00.217) 0:01:00.845 *****
2026-01-07 00:47:03.609177 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:03.609186 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:03.609194 | orchestrator |
2026-01-07 00:47:03.609201 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-07 00:47:03.609225 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:01.945) 0:01:02.790 *****
2026-01-07 00:47:03.609233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:03.609242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:03.609249 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609256 | orchestrator |
2026-01-07 00:47:03.609263 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-07 00:47:03.609271 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:00.189) 0:01:02.980 *****
2026-01-07 00:47:03.609279 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:03.609286 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:03.609293 | orchestrator |
2026-01-07 00:47:03.609300 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-07 00:47:03.609307 | orchestrator | Wednesday 07 January 2026 00:47:01 +0000 (0:00:01.342) 0:01:04.323 *****
2026-01-07 00:47:03.609315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:03.609322 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:03.609329 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609336 | orchestrator |
2026-01-07 00:47:03.609344 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-07 00:47:03.609351 | orchestrator | Wednesday 07 January 2026 00:47:02 +0000 (0:00:00.188) 0:01:04.512 *****
2026-01-07 00:47:03.609358 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609365 | orchestrator |
2026-01-07 00:47:03.609372 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-07 00:47:03.609380 | orchestrator | Wednesday 07 January 2026 00:47:02 +0000 (0:00:00.139) 0:01:04.651 *****
2026-01-07 00:47:03.609391 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:03.609399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:03.609406 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609413 | orchestrator |
2026-01-07 00:47:03.609421 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-07 00:47:03.609433 | orchestrator | Wednesday 07 January 2026 00:47:02 +0000 (0:00:00.156) 0:01:04.808 *****
2026-01-07 00:47:03.609440 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609447 | orchestrator |
2026-01-07 00:47:03.609455 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-07 00:47:03.609462 | orchestrator | Wednesday 07 January 2026 00:47:02 +0000 (0:00:00.144) 0:01:04.953 *****
2026-01-07 00:47:03.609469 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:03.609477 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:03.609484 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609491 | orchestrator |
2026-01-07 00:47:03.609498 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-07 00:47:03.609505 | orchestrator | Wednesday 07 January 2026 00:47:02 +0000 (0:00:00.168) 0:01:05.121 *****
2026-01-07 00:47:03.609524 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609531 | orchestrator |
2026-01-07 00:47:03.609547 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-07 00:47:03.609555 | orchestrator | Wednesday 07 January 2026 00:47:02 +0000 (0:00:00.133) 0:01:05.255 *****
2026-01-07 00:47:03.609562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:03.609569 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:03.609577 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:03.609584 | orchestrator |
2026-01-07 00:47:03.609591 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-07 00:47:03.609599 | orchestrator | Wednesday 07 January 2026 00:47:03 +0000 (0:00:00.145) 0:01:05.400 *****
2026-01-07 00:47:03.609606 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:47:03.609614 | orchestrator |
2026-01-07 00:47:03.609621 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-07 00:47:03.609629 | orchestrator | Wednesday 07 January 2026 00:47:03 +0000 (0:00:00.369) 0:01:05.770 *****
2026-01-07 00:47:03.609642 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:09.973250 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:09.973399 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.973421 | orchestrator |
2026-01-07 00:47:09.973435 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-07 00:47:09.973450 | orchestrator | Wednesday 07 January 2026 00:47:03 +0000 (0:00:00.164) 0:01:05.934 *****
2026-01-07 00:47:09.973465 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:09.973480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:09.973493 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.973505 | orchestrator |
2026-01-07 00:47:09.973518 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-07 00:47:09.973531 | orchestrator | Wednesday 07 January 2026 00:47:03 +0000 (0:00:00.186) 0:01:06.121 *****
2026-01-07 00:47:09.973544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:09.973557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:09.973634 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.973650 | orchestrator |
2026-01-07 00:47:09.973664 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-07 00:47:09.973678 | orchestrator | Wednesday 07 January 2026 00:47:03 +0000 (0:00:00.173) 0:01:06.294 *****
2026-01-07 00:47:09.973691 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.973704 | orchestrator |
2026-01-07 00:47:09.973720 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-07 00:47:09.973735 | orchestrator | Wednesday 07 January 2026 00:47:04 +0000 (0:00:00.137) 0:01:06.432 *****
2026-01-07 00:47:09.973750 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.973765 | orchestrator |
2026-01-07 00:47:09.973780 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-07 00:47:09.973794 | orchestrator | Wednesday 07 January 2026 00:47:04 +0000 (0:00:00.179) 0:01:06.612 *****
2026-01-07 00:47:09.973809 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.973822 | orchestrator |
2026-01-07 00:47:09.973837 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-07 00:47:09.973853 | orchestrator | Wednesday 07 January 2026 00:47:04 +0000 (0:00:00.157) 0:01:06.769 *****
2026-01-07 00:47:09.973868 | orchestrator | ok: [testbed-node-5] => {
2026-01-07 00:47:09.973885 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-07 00:47:09.973901 | orchestrator | }
2026-01-07 00:47:09.973917 | orchestrator |
2026-01-07 00:47:09.973932 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-07 00:47:09.973947 | orchestrator | Wednesday 07 January 2026 00:47:04 +0000 (0:00:00.168) 0:01:06.937 *****
2026-01-07 00:47:09.973962 | orchestrator | ok: [testbed-node-5] => {
2026-01-07 00:47:09.973977 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-07 00:47:09.974011 | orchestrator | }
2026-01-07 00:47:09.974114 | orchestrator |
2026-01-07 00:47:09.974133 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-07 00:47:09.974148 | orchestrator | Wednesday 07 January 2026 00:47:04 +0000 (0:00:00.148) 0:01:07.086 *****
2026-01-07 00:47:09.974158 | orchestrator | ok: [testbed-node-5] => {
2026-01-07 00:47:09.974167 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-07 00:47:09.974175 | orchestrator | }
2026-01-07 00:47:09.974184 | orchestrator |
2026-01-07 00:47:09.974193 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-07 00:47:09.974201 | orchestrator | Wednesday 07 January 2026 00:47:04 +0000 (0:00:00.172) 0:01:07.258 *****
2026-01-07 00:47:09.974210 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:47:09.974219 | orchestrator |
2026-01-07 00:47:09.974227 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-07 00:47:09.974236 | orchestrator | Wednesday 07 January 2026 00:47:05 +0000 (0:00:00.548) 0:01:07.807 *****
2026-01-07 00:47:09.974244 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:47:09.974253 | orchestrator |
2026-01-07 00:47:09.974261 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-07 00:47:09.974270 | orchestrator | Wednesday 07 January 2026 00:47:06 +0000 (0:00:00.549) 0:01:08.356 *****
2026-01-07 00:47:09.974278 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:47:09.974287 | orchestrator |
2026-01-07 00:47:09.974295 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-07 00:47:09.974304 | orchestrator | Wednesday 07 January 2026 00:47:06 +0000 (0:00:00.753) 0:01:09.110 *****
2026-01-07 00:47:09.974312 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:47:09.974321 | orchestrator |
2026-01-07 00:47:09.974329 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-07 00:47:09.974338 | orchestrator | Wednesday 07 January 2026 00:47:06 +0000 (0:00:00.164) 0:01:09.274 *****
2026-01-07 00:47:09.974346 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974355 | orchestrator |
2026-01-07 00:47:09.974363 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-07 00:47:09.974384 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.103) 0:01:09.378 *****
2026-01-07 00:47:09.974392 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974401 | orchestrator |
2026-01-07 00:47:09.974409 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-07 00:47:09.974442 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.118) 0:01:09.497 *****
2026-01-07 00:47:09.974452 | orchestrator | ok: [testbed-node-5] => {
2026-01-07 00:47:09.974461 | orchestrator |     "vgs_report": {
2026-01-07 00:47:09.974470 | orchestrator |         "vg": []
2026-01-07 00:47:09.974500 | orchestrator |     }
2026-01-07 00:47:09.974510 | orchestrator | }
2026-01-07 00:47:09.974518 | orchestrator |
2026-01-07 00:47:09.974527 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-07 00:47:09.974536 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.143) 0:01:09.641 *****
2026-01-07 00:47:09.974545 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974553 | orchestrator |
2026-01-07 00:47:09.974562 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-07 00:47:09.974571 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.127) 0:01:09.768 *****
2026-01-07 00:47:09.974579 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974588 | orchestrator |
2026-01-07 00:47:09.974597 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-07 00:47:09.974605 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.131) 0:01:09.900 *****
2026-01-07 00:47:09.974614 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974622 | orchestrator |
2026-01-07 00:47:09.974631 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-07 00:47:09.974640 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.124) 0:01:10.024 *****
2026-01-07 00:47:09.974648 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974657 | orchestrator |
2026-01-07 00:47:09.974666 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-07 00:47:09.974674 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.144) 0:01:10.168 *****
2026-01-07 00:47:09.974683 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974691 | orchestrator |
2026-01-07 00:47:09.974700 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-07 00:47:09.974708 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.134) 0:01:10.303 *****
2026-01-07 00:47:09.974717 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974725 | orchestrator |
2026-01-07 00:47:09.974734 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-07 00:47:09.974743 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.123) 0:01:10.427 *****
2026-01-07 00:47:09.974751 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974760 | orchestrator |
2026-01-07 00:47:09.974768 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-07 00:47:09.974777 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.157) 0:01:10.584 *****
2026-01-07 00:47:09.974786 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974794 | orchestrator |
2026-01-07 00:47:09.974803 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-07 00:47:09.974811 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.353) 0:01:10.938 *****
2026-01-07 00:47:09.974820 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974828 | orchestrator |
2026-01-07 00:47:09.974842 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-07 00:47:09.974851 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.166) 0:01:11.105 *****
2026-01-07 00:47:09.974860 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974868 | orchestrator |
2026-01-07 00:47:09.974877 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-07 00:47:09.974892 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.136) 0:01:11.241 *****
2026-01-07 00:47:09.974900 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974909 | orchestrator |
2026-01-07 00:47:09.974918 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-07 00:47:09.974926 | orchestrator | Wednesday 07 January 2026 00:47:09 +0000 (0:00:00.148) 0:01:11.389 *****
2026-01-07 00:47:09.974935 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974943 | orchestrator |
2026-01-07 00:47:09.974952 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-07 00:47:09.974961 | orchestrator | Wednesday 07 January 2026 00:47:09 +0000 (0:00:00.140) 0:01:11.529 *****
2026-01-07 00:47:09.974969 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.974978 | orchestrator |
2026-01-07 00:47:09.975014 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-07 00:47:09.975029 | orchestrator | Wednesday 07 January 2026 00:47:09 +0000 (0:00:00.154) 0:01:11.684 *****
2026-01-07 00:47:09.975044 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.975058 | orchestrator |
2026-01-07 00:47:09.975072 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-07 00:47:09.975087 | orchestrator | Wednesday 07 January 2026 00:47:09 +0000 (0:00:00.126) 0:01:11.811 *****
2026-01-07 00:47:09.975102 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:09.975118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:09.975128 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.975136 | orchestrator |
2026-01-07 00:47:09.975145 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-07 00:47:09.975153 | orchestrator | Wednesday 07 January 2026 00:47:09 +0000 (0:00:00.167) 0:01:11.978 *****
2026-01-07 00:47:09.975162 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:09.975171 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:09.975179 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:09.975188 | orchestrator |
2026-01-07 00:47:09.975196 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-07 00:47:09.975205 | orchestrator | Wednesday 07 January 2026 00:47:09 +0000 (0:00:00.154) 0:01:12.146 *****
2026-01-07 00:47:09.975221 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.105850 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.105943 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:13.105954 | orchestrator |
2026-01-07 00:47:13.105963 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-07 00:47:13.105971 | orchestrator | Wednesday 07 January 2026 00:47:09 +0000 (0:00:00.154) 0:01:12.301 *****
2026-01-07 00:47:13.106000 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.106008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.106044 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:13.106049 | orchestrator |
2026-01-07 00:47:13.106053 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-07 00:47:13.106076 | orchestrator | Wednesday 07 January 2026 00:47:10 +0000 (0:00:00.154) 0:01:12.455 *****
2026-01-07 00:47:13.106080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.106084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.106088 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:13.106092 | orchestrator |
2026-01-07 00:47:13.106096 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-07 00:47:13.106100 | orchestrator | Wednesday 07 January 2026 00:47:10 +0000 (0:00:00.161) 0:01:12.617 *****
2026-01-07 00:47:13.106104 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.106117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.106121 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:13.106125 | orchestrator |
2026-01-07 00:47:13.106129 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-07 00:47:13.106132 | orchestrator | Wednesday 07 January 2026 00:47:10 +0000 (0:00:00.354) 0:01:12.972 *****
2026-01-07 00:47:13.106136 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.106140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.106144 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:13.106148 | orchestrator |
2026-01-07 00:47:13.106152 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-07 00:47:13.106155 | orchestrator | Wednesday 07 January 2026 00:47:10 +0000 (0:00:00.173) 0:01:13.145 *****
2026-01-07 00:47:13.106159 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.106163 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.106167 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:13.106170 | orchestrator |
2026-01-07 00:47:13.106174 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-07 00:47:13.106178 | orchestrator | Wednesday 07 January 2026 00:47:10 +0000 (0:00:00.161) 0:01:13.306 *****
2026-01-07 00:47:13.106182 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:47:13.106187 | orchestrator |
2026-01-07 00:47:13.106190 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-07 00:47:13.106194 | orchestrator | Wednesday 07 January 2026 00:47:11 +0000 (0:00:00.605) 0:01:13.912 *****
2026-01-07 00:47:13.106198 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:47:13.106202 | orchestrator |
2026-01-07 00:47:13.106205 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-07 00:47:13.106209 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.538) 0:01:14.450 *****
2026-01-07 00:47:13.106213 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:47:13.106217 | orchestrator |
2026-01-07 00:47:13.106220 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-07 00:47:13.106224 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.151) 0:01:14.601 *****
2026-01-07 00:47:13.106228 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'vg_name': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.106233 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'vg_name': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.106243 | orchestrator |
2026-01-07 00:47:13.106246 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-07 00:47:13.106250 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.201) 0:01:14.803 *****
2026-01-07 00:47:13.106267 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.106271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.106275 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:13.106279 | orchestrator |
2026-01-07 00:47:13.106282 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-07 00:47:13.106287 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.162) 0:01:14.966 *****
2026-01-07 00:47:13.106291 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.106294 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.106298 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:13.106302 | orchestrator |
2026-01-07 00:47:13.106306 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-07 00:47:13.106310 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.161) 0:01:15.127 *****
2026-01-07 00:47:13.106313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:47:13.106317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:47:13.106321 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:13.106325 | orchestrator |
2026-01-07 00:47:13.106329 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-07 00:47:13.106332 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.140) 0:01:15.268 *****
2026-01-07 00:47:13.106337 | orchestrator | ok: [testbed-node-5] => {
2026-01-07 00:47:13.106340 | orchestrator |     "lvm_report": {
2026-01-07 00:47:13.106345 | orchestrator |         "lv": [
2026-01-07 00:47:13.106349 | orchestrator |             {
2026-01-07 00:47:13.106356 | orchestrator |                 "lv_name": "osd-block-5711b466-e770-5253-91be-c96275afda22",
2026-01-07 00:47:13.106361 | orchestrator |                 "vg_name": "ceph-5711b466-e770-5253-91be-c96275afda22"
2026-01-07 00:47:13.106364 | orchestrator |             },
2026-01-07 00:47:13.106368 | orchestrator |             {
2026-01-07 00:47:13.106372 | orchestrator |                 "lv_name": "osd-block-bbd296ce-f103-5a39-9243-23354e346d82",
2026-01-07 00:47:13.106376 | orchestrator |                 "vg_name": "ceph-bbd296ce-f103-5a39-9243-23354e346d82"
2026-01-07 00:47:13.106380 | orchestrator |             }
2026-01-07 00:47:13.106383 | orchestrator |         ],
2026-01-07 00:47:13.106387 | orchestrator |         "pv": [
2026-01-07 00:47:13.106392 | orchestrator |             {
2026-01-07 00:47:13.106396 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-07 00:47:13.106401 | orchestrator |                 "vg_name": "ceph-bbd296ce-f103-5a39-9243-23354e346d82"
2026-01-07 00:47:13.106406 | orchestrator |             },
2026-01-07 00:47:13.106410 | orchestrator |             {
2026-01-07 00:47:13.106414 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-07 00:47:13.106419 | orchestrator |                 "vg_name": "ceph-5711b466-e770-5253-91be-c96275afda22"
2026-01-07 00:47:13.106423 | orchestrator |             }
2026-01-07 00:47:13.106427 | orchestrator |         ]
2026-01-07 00:47:13.106435 | orchestrator |     }
2026-01-07 00:47:13.106440 | orchestrator | }
2026-01-07 00:47:13.106445 | orchestrator |
2026-01-07 00:47:13.106449 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:47:13.106454 | orchestrator | testbed-node-3 : ok=51   changed=2    unreachable=0    failed=0    skipped=62   rescued=0    ignored=0
2026-01-07 00:47:13.106458 | orchestrator | testbed-node-4 : ok=51   changed=2    unreachable=0    failed=0    skipped=62   rescued=0    ignored=0
2026-01-07 00:47:13.106463 | orchestrator | testbed-node-5 : ok=51   changed=2    unreachable=0    failed=0    skipped=62   rescued=0    ignored=0
2026-01-07 00:47:13.106467 | orchestrator |
2026-01-07 00:47:13.106471 | orchestrator |
2026-01-07 00:47:13.106476 | orchestrator |
2026-01-07 00:47:13.106480 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:47:13.106486 | orchestrator | Wednesday 07 January 2026 00:47:13 +0000 (0:00:00.143) 0:01:15.411 *****
2026-01-07 00:47:13.106493 | orchestrator | ===============================================================================
2026-01-07 00:47:13.106498 | orchestrator | Create block VGs -------------------------------------------------------- 5.82s
2026-01-07 00:47:13.106505 | orchestrator | Create block LVs -------------------------------------------------------- 4.33s
2026-01-07 00:47:13.106511 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.82s
2026-01-07 00:47:13.106517 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.78s
2026-01-07 00:47:13.106524 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.66s
2026-01-07 00:47:13.106530 | orchestrator | Add known partitions to the list of available block devices ------------- 1.65s
2026-01-07 00:47:13.106534 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.60s
2026-01-07 00:47:13.106539 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s
2026-01-07 00:47:13.106548 | orchestrator | Add known links to the list of available block devices ------------------ 1.40s
2026-01-07 00:47:13.572268 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s
2026-01-07 00:47:13.572357 | orchestrator | Print LVM report data --------------------------------------------------- 0.98s
2026-01-07 00:47:13.572368 |
orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-01-07 00:47:13.572378 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2026-01-07 00:47:13.572388 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2026-01-07 00:47:13.572397 | orchestrator | Get initial list of available block devices ----------------------------- 0.73s 2026-01-07 00:47:13.572407 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-01-07 00:47:13.572416 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.70s 2026-01-07 00:47:13.572426 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.70s 2026-01-07 00:47:13.572435 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.70s 2026-01-07 00:47:13.572443 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-01-07 00:47:26.143790 | orchestrator | 2026-01-07 00:47:26 | INFO  | Task cf13cf4e-e9f8-477e-9543-f1c5b33db061 (facts) was prepared for execution. 2026-01-07 00:47:26.143905 | orchestrator | 2026-01-07 00:47:26 | INFO  | It takes a moment until task cf13cf4e-e9f8-477e-9543-f1c5b33db061 (facts) has been started and output is visible here. 
2026-01-07 00:47:38.794855 | orchestrator | 2026-01-07 00:47:38.794998 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-07 00:47:38.795010 | orchestrator | 2026-01-07 00:47:38.795014 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-07 00:47:38.795019 | orchestrator | Wednesday 07 January 2026 00:47:30 +0000 (0:00:00.283) 0:00:00.283 ***** 2026-01-07 00:47:38.795045 | orchestrator | ok: [testbed-manager] 2026-01-07 00:47:38.795051 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:47:38.795055 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:47:38.795059 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:47:38.795063 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:47:38.795067 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:47:38.795071 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:47:38.795075 | orchestrator | 2026-01-07 00:47:38.795079 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-07 00:47:38.795085 | orchestrator | Wednesday 07 January 2026 00:47:31 +0000 (0:00:01.128) 0:00:01.411 ***** 2026-01-07 00:47:38.795089 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:47:38.795094 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:47:38.795098 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:47:38.795102 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:47:38.795106 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:47:38.795110 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:47:38.795114 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:47:38.795118 | orchestrator | 2026-01-07 00:47:38.795122 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-07 00:47:38.795126 | orchestrator | 2026-01-07 00:47:38.795130 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-07 00:47:38.795134 | orchestrator | Wednesday 07 January 2026 00:47:32 +0000 (0:00:01.262) 0:00:02.674 ***** 2026-01-07 00:47:38.795138 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:47:38.795142 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:47:38.795145 | orchestrator | ok: [testbed-manager] 2026-01-07 00:47:38.795149 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:47:38.795153 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:47:38.795157 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:47:38.795161 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:47:38.795165 | orchestrator | 2026-01-07 00:47:38.795169 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-07 00:47:38.795172 | orchestrator | 2026-01-07 00:47:38.795176 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-07 00:47:38.795181 | orchestrator | Wednesday 07 January 2026 00:47:37 +0000 (0:00:04.904) 0:00:07.578 ***** 2026-01-07 00:47:38.795187 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:47:38.795194 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:47:38.795199 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:47:38.795205 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:47:38.795211 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:47:38.795216 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:47:38.795222 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:47:38.795227 | orchestrator | 2026-01-07 00:47:38.795233 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:47:38.795239 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:47:38.795246 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-07 00:47:38.795252 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:47:38.795258 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:47:38.795264 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:47:38.795270 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:47:38.795283 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:47:38.795290 | orchestrator | 2026-01-07 00:47:38.795296 | orchestrator | 2026-01-07 00:47:38.795302 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:47:38.795307 | orchestrator | Wednesday 07 January 2026 00:47:38 +0000 (0:00:00.511) 0:00:08.090 ***** 2026-01-07 00:47:38.795313 | orchestrator | =============================================================================== 2026-01-07 00:47:38.795319 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.90s 2026-01-07 00:47:38.795324 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s 2026-01-07 00:47:38.795327 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2026-01-07 00:47:38.795331 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-01-07 00:47:51.262525 | orchestrator | 2026-01-07 00:47:51 | INFO  | Task 14887cc5-6be5-4cee-b89e-c4596e5dfb3a (frr) was prepared for execution. 2026-01-07 00:47:51.262597 | orchestrator | 2026-01-07 00:47:51 | INFO  | It takes a moment until task 14887cc5-6be5-4cee-b89e-c4596e5dfb3a (frr) has been started and output is visible here. 
2026-01-07 00:48:17.893816 | orchestrator | 2026-01-07 00:48:17.894115 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-07 00:48:17.894148 | orchestrator | 2026-01-07 00:48:17.894162 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-07 00:48:17.894204 | orchestrator | Wednesday 07 January 2026 00:47:55 +0000 (0:00:00.235) 0:00:00.235 ***** 2026-01-07 00:48:17.894219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:48:17.894235 | orchestrator | 2026-01-07 00:48:17.894250 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-07 00:48:17.894264 | orchestrator | Wednesday 07 January 2026 00:47:55 +0000 (0:00:00.216) 0:00:00.452 ***** 2026-01-07 00:48:17.894278 | orchestrator | changed: [testbed-manager] 2026-01-07 00:48:17.894293 | orchestrator | 2026-01-07 00:48:17.894308 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-07 00:48:17.894332 | orchestrator | Wednesday 07 January 2026 00:47:57 +0000 (0:00:01.238) 0:00:01.691 ***** 2026-01-07 00:48:17.894347 | orchestrator | changed: [testbed-manager] 2026-01-07 00:48:17.894361 | orchestrator | 2026-01-07 00:48:17.894375 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-07 00:48:17.894389 | orchestrator | Wednesday 07 January 2026 00:48:07 +0000 (0:00:10.473) 0:00:12.164 ***** 2026-01-07 00:48:17.894404 | orchestrator | ok: [testbed-manager] 2026-01-07 00:48:17.894420 | orchestrator | 2026-01-07 00:48:17.894434 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-07 00:48:17.894448 | orchestrator | Wednesday 07 January 2026 00:48:08 +0000 (0:00:01.031) 0:00:13.195 ***** 2026-01-07 
00:48:17.894463 | orchestrator | changed: [testbed-manager] 2026-01-07 00:48:17.894477 | orchestrator | 2026-01-07 00:48:17.894491 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-07 00:48:17.894505 | orchestrator | Wednesday 07 January 2026 00:48:09 +0000 (0:00:00.939) 0:00:14.134 ***** 2026-01-07 00:48:17.894519 | orchestrator | ok: [testbed-manager] 2026-01-07 00:48:17.894534 | orchestrator | 2026-01-07 00:48:17.894549 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-07 00:48:17.894565 | orchestrator | Wednesday 07 January 2026 00:48:10 +0000 (0:00:01.229) 0:00:15.364 ***** 2026-01-07 00:48:17.894580 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:48:17.894595 | orchestrator | 2026-01-07 00:48:17.894609 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-07 00:48:17.894619 | orchestrator | Wednesday 07 January 2026 00:48:10 +0000 (0:00:00.140) 0:00:15.504 ***** 2026-01-07 00:48:17.894653 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:48:17.894661 | orchestrator | 2026-01-07 00:48:17.894669 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-07 00:48:17.894677 | orchestrator | Wednesday 07 January 2026 00:48:11 +0000 (0:00:00.156) 0:00:15.661 ***** 2026-01-07 00:48:17.894685 | orchestrator | changed: [testbed-manager] 2026-01-07 00:48:17.894692 | orchestrator | 2026-01-07 00:48:17.894700 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-07 00:48:17.894708 | orchestrator | Wednesday 07 January 2026 00:48:12 +0000 (0:00:01.027) 0:00:16.688 ***** 2026-01-07 00:48:17.894716 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-07 00:48:17.894724 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-07 00:48:17.894734 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-07 00:48:17.894742 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-07 00:48:17.894750 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-07 00:48:17.894758 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-07 00:48:17.894766 | orchestrator | 2026-01-07 00:48:17.894774 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-07 00:48:17.894782 | orchestrator | Wednesday 07 January 2026 00:48:14 +0000 (0:00:02.298) 0:00:18.986 ***** 2026-01-07 00:48:17.894789 | orchestrator | ok: [testbed-manager] 2026-01-07 00:48:17.894797 | orchestrator | 2026-01-07 00:48:17.894805 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-07 00:48:17.894813 | orchestrator | Wednesday 07 January 2026 00:48:16 +0000 (0:00:01.672) 0:00:20.659 ***** 2026-01-07 00:48:17.894821 | orchestrator | changed: [testbed-manager] 2026-01-07 00:48:17.894829 | orchestrator | 2026-01-07 00:48:17.894837 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:48:17.894845 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:48:17.894854 | orchestrator | 2026-01-07 00:48:17.894864 | orchestrator | 2026-01-07 00:48:17.894877 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:48:17.894890 | orchestrator | Wednesday 07 January 2026 00:48:17 +0000 (0:00:01.423) 0:00:22.083 ***** 2026-01-07 00:48:17.894903 | 
orchestrator | =============================================================================== 2026-01-07 00:48:17.895090 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.47s 2026-01-07 00:48:17.895114 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.30s 2026-01-07 00:48:17.895123 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.67s 2026-01-07 00:48:17.895130 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.42s 2026-01-07 00:48:17.895139 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.24s 2026-01-07 00:48:17.895171 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.23s 2026-01-07 00:48:17.895180 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.03s 2026-01-07 00:48:17.895188 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.03s 2026-01-07 00:48:17.895195 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s 2026-01-07 00:48:17.895203 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-01-07 00:48:17.895211 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-01-07 00:48:17.895220 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-01-07 00:48:18.249334 | orchestrator | 2026-01-07 00:48:18.252743 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Jan 7 00:48:18 UTC 2026 2026-01-07 00:48:18.252824 | orchestrator | 2026-01-07 00:48:20.286420 | orchestrator | 2026-01-07 00:48:20 | INFO  | Collection nutshell is prepared for execution 2026-01-07 00:48:20.286574 | orchestrator | 2026-01-07 00:48:20 | INFO  | A [0] - 
dotfiles 2026-01-07 00:48:30.318845 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [0] - homer 2026-01-07 00:48:30.319009 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [0] - netdata 2026-01-07 00:48:30.319023 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [0] - openstackclient 2026-01-07 00:48:30.319032 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [0] - phpmyadmin 2026-01-07 00:48:30.319178 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [0] - common 2026-01-07 00:48:30.323542 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [1] -- loadbalancer 2026-01-07 00:48:30.323664 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [2] --- opensearch 2026-01-07 00:48:30.324121 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [2] --- mariadb-ng 2026-01-07 00:48:30.324598 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [3] ---- horizon 2026-01-07 00:48:30.324984 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [3] ---- keystone 2026-01-07 00:48:30.325296 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [4] ----- neutron 2026-01-07 00:48:30.325512 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [5] ------ wait-for-nova 2026-01-07 00:48:30.326193 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [6] ------- octavia 2026-01-07 00:48:30.328239 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [4] ----- barbican 2026-01-07 00:48:30.328389 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [4] ----- designate 2026-01-07 00:48:30.328412 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [4] ----- ironic 2026-01-07 00:48:30.328428 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [4] ----- placement 2026-01-07 00:48:30.328454 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [4] ----- magnum 2026-01-07 00:48:30.329203 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [1] -- openvswitch 2026-01-07 00:48:30.329244 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [2] --- ovn 2026-01-07 00:48:30.329840 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [1] -- memcached 2026-01-07 
00:48:30.329920 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [1] -- redis 2026-01-07 00:48:30.330108 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [1] -- rabbitmq-ng 2026-01-07 00:48:30.330634 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [0] - kubernetes 2026-01-07 00:48:30.333077 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [1] -- kubeconfig 2026-01-07 00:48:30.333117 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [1] -- copy-kubeconfig 2026-01-07 00:48:30.333647 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [0] - ceph 2026-01-07 00:48:30.336058 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [1] -- ceph-pools 2026-01-07 00:48:30.336530 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [2] --- copy-ceph-keys 2026-01-07 00:48:30.336559 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [3] ---- cephclient 2026-01-07 00:48:30.336568 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-07 00:48:30.336861 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [4] ----- wait-for-keystone 2026-01-07 00:48:30.336876 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-07 00:48:30.337145 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [5] ------ glance 2026-01-07 00:48:30.337287 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [5] ------ cinder 2026-01-07 00:48:30.337457 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [5] ------ nova 2026-01-07 00:48:30.338066 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [4] ----- prometheus 2026-01-07 00:48:30.338228 | orchestrator | 2026-01-07 00:48:30 | INFO  | A [5] ------ grafana 2026-01-07 00:48:30.571558 | orchestrator | 2026-01-07 00:48:30 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-07 00:48:30.572462 | orchestrator | 2026-01-07 00:48:30 | INFO  | Tasks are running in the background 2026-01-07 00:48:33.835245 | orchestrator | 2026-01-07 00:48:33 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-07 00:48:35.970234 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task f4f7e5ea-327b-4bac-be58-de79b6db4d4d is in state STARTED 2026-01-07 00:48:35.970493 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:48:35.971334 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:48:35.973036 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:48:35.973863 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:48:35.985512 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:48:35.988543 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:48:35.990176 | orchestrator | 2026-01-07 00:48:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:39.144864 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task f4f7e5ea-327b-4bac-be58-de79b6db4d4d is in state STARTED 2026-01-07 00:48:39.145018 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:48:39.145046 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:48:39.145055 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:48:39.145062 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:48:39.145069 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:48:39.145076 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 
07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:48:39.145084 | orchestrator | 2026-01-07 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:42.151727 | orchestrator | 2026-01-07 00:48:42 | INFO  | Task f4f7e5ea-327b-4bac-be58-de79b6db4d4d is in state STARTED 2026-01-07 00:48:42.152241 | orchestrator | 2026-01-07 00:48:42 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:48:42.152779 | orchestrator | 2026-01-07 00:48:42 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:48:42.155091 | orchestrator | 2026-01-07 00:48:42 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:48:42.155584 | orchestrator | 2026-01-07 00:48:42 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:48:42.156275 | orchestrator | 2026-01-07 00:48:42 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:48:42.156787 | orchestrator | 2026-01-07 00:48:42 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:48:42.156846 | orchestrator | 2026-01-07 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:45.195055 | orchestrator | 2026-01-07 00:48:45 | INFO  | Task f4f7e5ea-327b-4bac-be58-de79b6db4d4d is in state STARTED 2026-01-07 00:48:45.198010 | orchestrator | 2026-01-07 00:48:45 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:48:45.198488 | orchestrator | 2026-01-07 00:48:45 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:48:45.207719 | orchestrator | 2026-01-07 00:48:45 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:48:45.207798 | orchestrator | 2026-01-07 00:48:45 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:48:45.207810 | orchestrator | 2026-01-07 00:48:45 | INFO  | Task 
0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:48:45.207818 | orchestrator | 2026-01-07 00:48:45 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:48:45.207826 | orchestrator | 2026-01-07 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:48.246845 | orchestrator | 2026-01-07 00:48:48 | INFO  | Task f4f7e5ea-327b-4bac-be58-de79b6db4d4d is in state STARTED 2026-01-07 00:48:48.248521 | orchestrator | 2026-01-07 00:48:48 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:48:48.249428 | orchestrator | 2026-01-07 00:48:48 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:48:48.250063 | orchestrator | 2026-01-07 00:48:48 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:48:48.255387 | orchestrator | 2026-01-07 00:48:48 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:48:48.256984 | orchestrator | 2026-01-07 00:48:48 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:48:48.258824 | orchestrator | 2026-01-07 00:48:48 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:48:48.258897 | orchestrator | 2026-01-07 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:51.659835 | orchestrator | 2026-01-07 00:48:51 | INFO  | Task f4f7e5ea-327b-4bac-be58-de79b6db4d4d is in state STARTED 2026-01-07 00:48:51.660079 | orchestrator | 2026-01-07 00:48:51 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:48:51.660097 | orchestrator | 2026-01-07 00:48:51 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:48:51.660105 | orchestrator | 2026-01-07 00:48:51 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:48:51.660112 | orchestrator | 2026-01-07 00:48:51 | INFO  | Task 
133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:48:51.660121 | orchestrator | 2026-01-07 00:48:51 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:48:51.660128 | orchestrator | 2026-01-07 00:48:51 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:48:51.660137 | orchestrator | 2026-01-07 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:54.602457 | orchestrator | 2026-01-07 00:48:54 | INFO  | Task f4f7e5ea-327b-4bac-be58-de79b6db4d4d is in state STARTED 2026-01-07 00:48:54.701084 | orchestrator | 2026-01-07 00:48:54 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:48:55.007346 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:48:55.014302 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:48:55.014376 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:48:55.014386 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:48:55.015191 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:48:55.015631 | orchestrator | 2026-01-07 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:58.067808 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task f4f7e5ea-327b-4bac-be58-de79b6db4d4d is in state STARTED 2026-01-07 00:48:58.069060 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:48:58.086416 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:48:58.091463 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 
508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:48:58.097600 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:48:58.099057 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:48:58.101233 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:48:58.101722 | orchestrator | 2026-01-07 00:48:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:01.268331 | orchestrator | 2026-01-07 00:49:01.268411 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-07 00:49:01.268420 | orchestrator | 2026-01-07 00:49:01.268427 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-01-07 00:49:01.268434 | orchestrator | Wednesday 07 January 2026 00:48:46 +0000 (0:00:00.699) 0:00:00.699 ***** 2026-01-07 00:49:01.268442 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:49:01.268449 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:49:01.268455 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:49:01.268462 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:49:01.268469 | orchestrator | changed: [testbed-manager] 2026-01-07 00:49:01.268475 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:49:01.268482 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:49:01.268488 | orchestrator | 2026-01-07 00:49:01.268495 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-01-07 00:49:01.268501 | orchestrator | Wednesday 07 January 2026 00:48:50 +0000 (0:00:03.996) 0:00:04.695 ***** 2026-01-07 00:49:01.268509 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-07 00:49:01.268515 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-07 00:49:01.268522 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-07 00:49:01.268528 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-07 00:49:01.268534 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-07 00:49:01.268540 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-07 00:49:01.268546 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-07 00:49:01.268552 | orchestrator | 2026-01-07 00:49:01.268558 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-01-07 00:49:01.268565 | orchestrator | Wednesday 07 January 2026 00:48:52 +0000 (0:00:01.971) 0:00:06.667 ***** 2026-01-07 00:49:01.268574 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:48:50.948227', 'end': '2026-01-07 00:48:50.955987', 'delta': '0:00:00.007760', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:49:01.268609 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:48:51.359520', 'end': '2026-01-07 00:48:51.367486', 'delta': '0:00:00.007966', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:49:01.268617 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:48:51.737960', 'end': '2026-01-07 00:48:51.745896', 'delta': '0:00:00.007936', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:49:01.268644 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:48:51.590456', 'end': '2026-01-07 00:48:51.598090', 'delta': '0:00:00.007634', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:49:01.268805 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:48:52.091467', 'end': '2026-01-07 00:48:52.099294', 'delta': '0:00:00.007827', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:49:01.268826 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:48:52.167191', 'end': '2026-01-07 00:48:52.173286', 'delta': '0:00:00.006095', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:49:01.268833 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:48:51.025661', 'end': '2026-01-07 00:48:51.030559', 'delta': '0:00:00.004898', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:49:01.268837 | orchestrator | 2026-01-07 00:49:01.268841 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-01-07 00:49:01.268845 | orchestrator | Wednesday 07 January 2026 00:48:53 +0000 (0:00:01.184) 0:00:07.852 ***** 2026-01-07 00:49:01.268849 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-07 00:49:01.268853 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-07 00:49:01.268857 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-07 00:49:01.268861 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-07 00:49:01.268882 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-07 00:49:01.268887 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-07 00:49:01.268892 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-07 00:49:01.268897 | orchestrator | 2026-01-07 00:49:01.268901 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-01-07 00:49:01.268906 | orchestrator | Wednesday 07 January 2026 00:48:55 +0000 (0:00:01.636) 0:00:09.488 ***** 2026-01-07 00:49:01.268911 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-07 00:49:01.268916 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-07 00:49:01.268920 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-07 00:49:01.268925 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-07 00:49:01.268929 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-07 00:49:01.268934 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-07 00:49:01.268938 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-07 00:49:01.268943 | orchestrator | 2026-01-07 00:49:01.268947 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:49:01.268958 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:49:01.268964 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:49:01.268969 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:49:01.268978 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:49:01.268983 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:49:01.268987 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:49:01.268992 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:49:01.268997 | orchestrator | 2026-01-07 00:49:01.269001 | orchestrator | 2026-01-07 00:49:01.269006 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:49:01.269010 | orchestrator | Wednesday 07 January 2026 00:48:58 +0000 (0:00:03.694) 0:00:13.183 ***** 2026-01-07 00:49:01.269015 | orchestrator | =============================================================================== 2026-01-07 00:49:01.269019 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.00s 2026-01-07 00:49:01.269025 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.69s 2026-01-07 00:49:01.269029 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.97s 2026-01-07 00:49:01.269033 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.64s 2026-01-07 00:49:01.269038 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. 
--- 1.18s 2026-01-07 00:49:01.269049 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task f4f7e5ea-327b-4bac-be58-de79b6db4d4d is in state SUCCESS 2026-01-07 00:49:01.269054 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:01.269063 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:01.269068 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:01.269072 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:01.269077 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:01.269081 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:01.269089 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:49:01.269094 | orchestrator | 2026-01-07 00:49:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:04.545022 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:04.545095 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:04.545109 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:04.545120 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:04.545130 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:04.545139 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 
0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:04.545150 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:49:04.545161 | orchestrator | 2026-01-07 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:07.385904 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:07.386164 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:07.386734 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:07.387370 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:07.388125 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:07.388838 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:07.389443 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:49:07.389469 | orchestrator | 2026-01-07 00:49:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:10.432472 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:10.432605 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:10.435359 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:10.435663 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:10.439209 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task 
133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:10.439397 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:10.445496 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:49:10.445579 | orchestrator | 2026-01-07 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:13.492175 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:13.493429 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:13.495325 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:13.498108 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:13.499976 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:13.501633 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:13.503147 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:49:13.503397 | orchestrator | 2026-01-07 00:49:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:16.550553 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:16.551469 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:16.554737 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:16.555659 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 
508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:16.556634 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:16.557875 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:16.558693 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:49:16.558723 | orchestrator | 2026-01-07 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:19.779138 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:19.779188 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:19.779193 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:19.779197 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:19.779200 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:19.779203 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:19.779206 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state STARTED 2026-01-07 00:49:19.779209 | orchestrator | 2026-01-07 00:49:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:22.921357 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:22.921416 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:22.921423 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task 
538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:22.921428 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:22.921433 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:22.921440 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:22.921448 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task 07e0e2f4-aa58-486a-acc6-d029a2cf0f2b is in state SUCCESS 2026-01-07 00:49:22.921455 | orchestrator | 2026-01-07 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:25.952596 | orchestrator | 2026-01-07 00:49:25 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:25.953147 | orchestrator | 2026-01-07 00:49:25 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:25.958549 | orchestrator | 2026-01-07 00:49:25 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:25.958594 | orchestrator | 2026-01-07 00:49:25 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:25.961775 | orchestrator | 2026-01-07 00:49:25 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:25.963472 | orchestrator | 2026-01-07 00:49:25 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:25.963718 | orchestrator | 2026-01-07 00:49:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:29.023166 | orchestrator | 2026-01-07 00:49:29 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:29.027574 | orchestrator | 2026-01-07 00:49:29 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:29.030294 | orchestrator | 2026-01-07 00:49:29 | INFO  | Task 
538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:29.033989 | orchestrator | 2026-01-07 00:49:29 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:29.041264 | orchestrator | 2026-01-07 00:49:29 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:29.047429 | orchestrator | 2026-01-07 00:49:29 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:29.047484 | orchestrator | 2026-01-07 00:49:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:32.107809 | orchestrator | 2026-01-07 00:49:32 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:32.109437 | orchestrator | 2026-01-07 00:49:32 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state STARTED 2026-01-07 00:49:32.112293 | orchestrator | 2026-01-07 00:49:32 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:32.114178 | orchestrator | 2026-01-07 00:49:32 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:32.115639 | orchestrator | 2026-01-07 00:49:32 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:32.116789 | orchestrator | 2026-01-07 00:49:32 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:32.116858 | orchestrator | 2026-01-07 00:49:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:35.181320 | orchestrator | 2026-01-07 00:49:35 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:35.182519 | orchestrator | 2026-01-07 00:49:35 | INFO  | Task 84f7ff5b-c986-4d86-9b6f-84a4bbc48dd8 is in state SUCCESS 2026-01-07 00:49:35.184867 | orchestrator | 2026-01-07 00:49:35 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:35.186646 | orchestrator | 2026-01-07 00:49:35 | INFO  | Task 
508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:35.191454 | orchestrator | 2026-01-07 00:49:35 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:35.191509 | orchestrator | 2026-01-07 00:49:35 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:35.191516 | orchestrator | 2026-01-07 00:49:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:38.252468 | orchestrator | 2026-01-07 00:49:38 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:38.254890 | orchestrator | 2026-01-07 00:49:38 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:38.255947 | orchestrator | 2026-01-07 00:49:38 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:38.256955 | orchestrator | 2026-01-07 00:49:38 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:38.258938 | orchestrator | 2026-01-07 00:49:38 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:38.258999 | orchestrator | 2026-01-07 00:49:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:41.295177 | orchestrator | 2026-01-07 00:49:41 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:41.296069 | orchestrator | 2026-01-07 00:49:41 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:41.296951 | orchestrator | 2026-01-07 00:49:41 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:41.299036 | orchestrator | 2026-01-07 00:49:41 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:41.300967 | orchestrator | 2026-01-07 00:49:41 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:41.301104 | orchestrator | 2026-01-07 00:49:41 | INFO  | Wait 1 
second(s) until the next check 2026-01-07 00:49:44.536807 | orchestrator | 2026-01-07 00:49:44 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:44.536945 | orchestrator | 2026-01-07 00:49:44 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:44.536957 | orchestrator | 2026-01-07 00:49:44 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:44.543609 | orchestrator | 2026-01-07 00:49:44 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:44.552553 | orchestrator | 2026-01-07 00:49:44 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:44.555321 | orchestrator | 2026-01-07 00:49:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:47.679043 | orchestrator | 2026-01-07 00:49:47 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:47.681619 | orchestrator | 2026-01-07 00:49:47 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:47.682958 | orchestrator | 2026-01-07 00:49:47 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:47.688509 | orchestrator | 2026-01-07 00:49:47 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:47.692363 | orchestrator | 2026-01-07 00:49:47 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:47.692439 | orchestrator | 2026-01-07 00:49:47 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:50.742488 | orchestrator | 2026-01-07 00:49:50 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:50.745914 | orchestrator | 2026-01-07 00:49:50 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:50.746612 | orchestrator | 2026-01-07 00:49:50 | INFO  | Task 
508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:50.752597 | orchestrator | 2026-01-07 00:49:50 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:50.755597 | orchestrator | 2026-01-07 00:49:50 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:50.757125 | orchestrator | 2026-01-07 00:49:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:53.812224 | orchestrator | 2026-01-07 00:49:53 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:53.813333 | orchestrator | 2026-01-07 00:49:53 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:53.815289 | orchestrator | 2026-01-07 00:49:53 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:53.820817 | orchestrator | 2026-01-07 00:49:53 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:53.822334 | orchestrator | 2026-01-07 00:49:53 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:53.822377 | orchestrator | 2026-01-07 00:49:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:56.878420 | orchestrator | 2026-01-07 00:49:56 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:56.878504 | orchestrator | 2026-01-07 00:49:56 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:56.880432 | orchestrator | 2026-01-07 00:49:56 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:56.882949 | orchestrator | 2026-01-07 00:49:56 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:56.884636 | orchestrator | 2026-01-07 00:49:56 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:56.884981 | orchestrator | 2026-01-07 00:49:56 | INFO  | Wait 1 
second(s) until the next check 2026-01-07 00:49:59.929678 | orchestrator | 2026-01-07 00:49:59 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:49:59.929953 | orchestrator | 2026-01-07 00:49:59 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:49:59.930730 | orchestrator | 2026-01-07 00:49:59 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:49:59.931423 | orchestrator | 2026-01-07 00:49:59 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:49:59.933260 | orchestrator | 2026-01-07 00:49:59 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:49:59.933286 | orchestrator | 2026-01-07 00:49:59 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:02.985152 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:50:02.985202 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:50:02.986966 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:50:02.989097 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED 2026-01-07 00:50:02.991203 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:50:02.992094 | orchestrator | 2026-01-07 00:50:02 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:06.039867 | orchestrator | 2026-01-07 00:50:06 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED 2026-01-07 00:50:06.039943 | orchestrator | 2026-01-07 00:50:06 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED 2026-01-07 00:50:06.044565 | orchestrator | 2026-01-07 00:50:06 | INFO  | Task 
508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:06.045004 | orchestrator | 2026-01-07 00:50:06 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:06.046131 | orchestrator | 2026-01-07 00:50:06 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:06.046173 | orchestrator | 2026-01-07 00:50:06 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:09.091328 | orchestrator | 2026-01-07 00:50:09 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED
2026-01-07 00:50:09.091995 | orchestrator | 2026-01-07 00:50:09 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state STARTED
2026-01-07 00:50:09.093613 | orchestrator | 2026-01-07 00:50:09 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:09.095041 | orchestrator | 2026-01-07 00:50:09 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:09.096009 | orchestrator | 2026-01-07 00:50:09 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:09.096213 | orchestrator | 2026-01-07 00:50:09 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:12.143337 | orchestrator |
2026-01-07 00:50:12.143419 | orchestrator |
2026-01-07 00:50:12.143426 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-07 00:50:12.143431 | orchestrator |
2026-01-07 00:50:12.143436 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-07 00:50:12.143442 | orchestrator | Wednesday 07 January 2026 00:48:42 +0000 (0:00:00.581) 0:00:00.582 *****
2026-01-07 00:50:12.143447 | orchestrator | ok: [testbed-manager] => {
2026-01-07 00:50:12.143453 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-01-07 00:50:12.143479 | orchestrator | }
2026-01-07 00:50:12.143483 | orchestrator |
2026-01-07 00:50:12.143488 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-07 00:50:12.143492 | orchestrator | Wednesday 07 January 2026 00:48:42 +0000 (0:00:00.311) 0:00:00.894 *****
2026-01-07 00:50:12.143496 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.143501 | orchestrator |
2026-01-07 00:50:12.143505 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-07 00:50:12.143509 | orchestrator | Wednesday 07 January 2026 00:48:44 +0000 (0:00:01.627) 0:00:02.521 *****
2026-01-07 00:50:12.143513 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-07 00:50:12.143517 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-07 00:50:12.143521 | orchestrator |
2026-01-07 00:50:12.143525 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-07 00:50:12.143529 | orchestrator | Wednesday 07 January 2026 00:48:45 +0000 (0:00:01.392) 0:00:03.914 *****
2026-01-07 00:50:12.143533 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.143537 | orchestrator |
2026-01-07 00:50:12.143540 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-07 00:50:12.143544 | orchestrator | Wednesday 07 January 2026 00:48:48 +0000 (0:00:02.955) 0:00:06.869 *****
2026-01-07 00:50:12.143548 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.143552 | orchestrator |
2026-01-07 00:50:12.143555 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-07 00:50:12.143559 | orchestrator | Wednesday 07 January 2026 00:48:50 +0000 (0:00:01.677) 0:00:08.547 *****
2026-01-07 00:50:12.143563 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-07 00:50:12.143567 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.143571 | orchestrator |
2026-01-07 00:50:12.143575 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-07 00:50:12.143578 | orchestrator | Wednesday 07 January 2026 00:49:17 +0000 (0:00:27.125) 0:00:35.672 *****
2026-01-07 00:50:12.143582 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.143593 | orchestrator |
2026-01-07 00:50:12.143597 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:50:12.143601 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:12.143607 | orchestrator |
2026-01-07 00:50:12.143610 | orchestrator |
2026-01-07 00:50:12.143614 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:50:12.143618 | orchestrator | Wednesday 07 January 2026 00:49:21 +0000 (0:00:03.636) 0:00:39.309 *****
2026-01-07 00:50:12.143622 | orchestrator | ===============================================================================
2026-01-07 00:50:12.143626 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.13s
2026-01-07 00:50:12.143630 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.64s
2026-01-07 00:50:12.143633 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.96s
2026-01-07 00:50:12.143650 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.68s
2026-01-07 00:50:12.143654 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.63s
2026-01-07 00:50:12.143658 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.39s
2026-01-07 00:50:12.143662 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.31s
2026-01-07 00:50:12.143666 | orchestrator |
2026-01-07 00:50:12.143672 | orchestrator |
2026-01-07 00:50:12.143683 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-07 00:50:12.143693 | orchestrator |
2026-01-07 00:50:12.143708 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-07 00:50:12.143719 | orchestrator | Wednesday 07 January 2026 00:48:45 +0000 (0:00:00.684) 0:00:00.684 *****
2026-01-07 00:50:12.143726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-07 00:50:12.143733 | orchestrator |
2026-01-07 00:50:12.143739 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-07 00:50:12.143745 | orchestrator | Wednesday 07 January 2026 00:48:45 +0000 (0:00:00.190) 0:00:00.874 *****
2026-01-07 00:50:12.143750 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-07 00:50:12.143756 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-07 00:50:12.143762 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-07 00:50:12.143768 | orchestrator |
2026-01-07 00:50:12.143774 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-07 00:50:12.143849 | orchestrator | Wednesday 07 January 2026 00:48:48 +0000 (0:00:03.056) 0:00:03.931 *****
2026-01-07 00:50:12.143854 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.143857 | orchestrator |
2026-01-07 00:50:12.143861 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-07 00:50:12.143865 | orchestrator | Wednesday 07 January 2026 00:48:50 +0000 (0:00:02.058) 0:00:05.989 *****
2026-01-07 00:50:12.143882 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-07 00:50:12.143886 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.143890 | orchestrator |
2026-01-07 00:50:12.143895 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-07 00:50:12.143899 | orchestrator | Wednesday 07 January 2026 00:49:26 +0000 (0:00:35.813) 0:00:41.802 *****
2026-01-07 00:50:12.143904 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.143908 | orchestrator |
2026-01-07 00:50:12.143913 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-07 00:50:12.143917 | orchestrator | Wednesday 07 January 2026 00:49:27 +0000 (0:00:01.603) 0:00:43.406 *****
2026-01-07 00:50:12.143921 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.143926 | orchestrator |
2026-01-07 00:50:12.143930 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-07 00:50:12.143936 | orchestrator | Wednesday 07 January 2026 00:49:28 +0000 (0:00:00.648) 0:00:44.054 *****
2026-01-07 00:50:12.143942 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.143950 | orchestrator |
2026-01-07 00:50:12.143959 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-07 00:50:12.143965 | orchestrator | Wednesday 07 January 2026 00:49:31 +0000 (0:00:03.015) 0:00:47.069 *****
2026-01-07 00:50:12.143971 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.143977 | orchestrator |
2026-01-07 00:50:12.143983 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-07 00:50:12.143990 | orchestrator | Wednesday 07 January 2026 00:49:32 +0000 (0:00:00.620) 0:00:47.816 *****
2026-01-07 00:50:12.143997 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.144003 | orchestrator |
2026-01-07 00:50:12.144009 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-07 00:50:12.144022 | orchestrator | Wednesday 07 January 2026 00:49:32 +0000 (0:00:00.620) 0:00:48.437 *****
2026-01-07 00:50:12.144029 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.144039 | orchestrator |
2026-01-07 00:50:12.144045 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:50:12.144051 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:12.144058 | orchestrator |
2026-01-07 00:50:12.144064 | orchestrator |
2026-01-07 00:50:12.144070 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:50:12.144076 | orchestrator | Wednesday 07 January 2026 00:49:33 +0000 (0:00:00.860) 0:00:49.297 *****
2026-01-07 00:50:12.144082 | orchestrator | ===============================================================================
2026-01-07 00:50:12.144088 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.81s
2026-01-07 00:50:12.144094 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.06s
2026-01-07 00:50:12.144100 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.02s
2026-01-07 00:50:12.144106 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.06s
2026-01-07 00:50:12.144113 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.60s
2026-01-07 00:50:12.144119 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.86s
2026-01-07 00:50:12.144127 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.74s
2026-01-07 00:50:12.144131 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.65s
2026-01-07 00:50:12.144135 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.62s
2026-01-07 00:50:12.144140 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.19s
2026-01-07 00:50:12.144144 | orchestrator |
2026-01-07 00:50:12.144148 | orchestrator |
2026-01-07 00:50:12.144153 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:50:12.144157 | orchestrator |
2026-01-07 00:50:12.144162 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:50:12.144166 | orchestrator | Wednesday 07 January 2026 00:48:44 +0000 (0:00:00.673) 0:00:00.673 *****
2026-01-07 00:50:12.144170 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-07 00:50:12.144179 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-07 00:50:12.144184 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-07 00:50:12.144188 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-07 00:50:12.144193 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-07 00:50:12.144198 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-07 00:50:12.144202 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-07 00:50:12.144206 | orchestrator |
2026-01-07 00:50:12.144211 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-07 00:50:12.144215 | orchestrator |
2026-01-07 00:50:12.144220 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-07 00:50:12.144224 | orchestrator | Wednesday 07 January 2026 00:48:46 +0000 (0:00:02.079) 0:00:02.753 *****
2026-01-07 00:50:12.144252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:50:12.144259 | orchestrator |
2026-01-07 00:50:12.144263 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-07 00:50:12.144267 | orchestrator | Wednesday 07 January 2026 00:48:48 +0000 (0:00:02.070) 0:00:04.824 *****
2026-01-07 00:50:12.144271 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:50:12.144274 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:50:12.144284 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:50:12.144287 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:50:12.144291 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.144300 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:50:12.144304 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:50:12.144308 | orchestrator |
2026-01-07 00:50:12.144312 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-07 00:50:12.144316 | orchestrator | Wednesday 07 January 2026 00:48:51 +0000 (0:00:03.270) 0:00:07.048 *****
2026-01-07 00:50:12.144320 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:50:12.144323 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:50:12.144327 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:50:12.144331 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:50:12.144335 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:50:12.144338 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:50:12.144342 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.144346 | orchestrator |
2026-01-07 00:50:12.144350 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-07 00:50:12.144354 | orchestrator | Wednesday 07 January 2026 00:48:54 +0000 (0:00:03.270) 0:00:10.319 *****
2026-01-07 00:50:12.144357 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.144361 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:12.144365 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:12.144369 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:12.144373 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:12.144376 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:12.144380 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:12.144384 | orchestrator |
2026-01-07 00:50:12.144388 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-07 00:50:12.144392 | orchestrator | Wednesday 07 January 2026 00:48:56 +0000 (0:00:02.084) 0:00:12.404 *****
2026-01-07 00:50:12.144396 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:12.144400 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:12.144403 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:12.144407 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:12.144411 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:12.144415 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:12.144418 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.144422 | orchestrator |
2026-01-07 00:50:12.144426 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-07 00:50:12.144430 | orchestrator | Wednesday 07 January 2026 00:49:08 +0000 (0:00:12.291) 0:00:24.695 *****
2026-01-07 00:50:12.144434 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:12.144438 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:12.144441 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:12.144445 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:12.144449 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:12.144453 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:12.144456 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.144460 | orchestrator |
2026-01-07 00:50:12.144464 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-07 00:50:12.144468 | orchestrator | Wednesday 07 January 2026 00:49:49 +0000 (0:00:40.992) 0:01:05.688 *****
2026-01-07 00:50:12.144472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:50:12.144478 | orchestrator |
2026-01-07 00:50:12.144482 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-07 00:50:12.144486 | orchestrator | Wednesday 07 January 2026 00:49:52 +0000 (0:00:02.370) 0:01:08.058 *****
2026-01-07 00:50:12.144489 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-07 00:50:12.144494 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-07 00:50:12.144501 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-07 00:50:12.144505 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-07 00:50:12.144509 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-07 00:50:12.144513 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-07 00:50:12.144516 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-07 00:50:12.144520 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-07 00:50:12.144524 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-07 00:50:12.144528 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-07 00:50:12.144535 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-07 00:50:12.144538 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-07 00:50:12.144542 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-07 00:50:12.144546 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-07 00:50:12.144550 | orchestrator |
2026-01-07 00:50:12.144554 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-07 00:50:12.144558 | orchestrator | Wednesday 07 January 2026 00:49:56 +0000 (0:00:04.871) 0:01:12.929 *****
2026-01-07 00:50:12.144562 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.144565 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:50:12.144569 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:50:12.144573 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:50:12.144577 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:50:12.144581 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:50:12.144584 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:50:12.144588 | orchestrator |
2026-01-07 00:50:12.144592 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-07 00:50:12.144596 | orchestrator | Wednesday 07 January 2026 00:49:58 +0000 (0:00:01.176) 0:01:14.105 *****
2026-01-07 00:50:12.144600 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:12.144603 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.144607 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:12.144611 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:12.144615 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:12.144619 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:12.144622 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:12.144626 | orchestrator |
2026-01-07 00:50:12.144630 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-07 00:50:12.144637 | orchestrator | Wednesday 07 January 2026 00:49:59 +0000 (0:00:01.587) 0:01:15.693 *****
2026-01-07 00:50:12.144641 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:50:12.144645 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:50:12.144648 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:50:12.144652 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:50:12.144656 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.144660 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:50:12.144663 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:50:12.144667 | orchestrator |
2026-01-07 00:50:12.144671 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-07 00:50:12.144675 | orchestrator | Wednesday 07 January 2026 00:50:01 +0000 (0:00:01.389) 0:01:17.082 *****
2026-01-07 00:50:12.144679 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:50:12.144682 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:50:12.144686 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:50:12.144690 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:50:12.144694 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:50:12.144698 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:12.144701 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:50:12.144705 | orchestrator |
2026-01-07 00:50:12.144709 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-07 00:50:12.144713 | orchestrator | Wednesday 07 January 2026 00:50:03 +0000 (0:00:02.218) 0:01:19.301 *****
2026-01-07 00:50:12.144723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-07 00:50:12.144728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:50:12.144732 | orchestrator |
2026-01-07 00:50:12.144736 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-07 00:50:12.144740 | orchestrator | Wednesday 07 January 2026 00:50:05 +0000 (0:00:01.872) 0:01:21.174 *****
2026-01-07 00:50:12.144744 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.144748 | orchestrator |
2026-01-07 00:50:12.144751 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-07 00:50:12.144755 | orchestrator | Wednesday 07 January 2026 00:50:07 +0000 (0:00:02.382) 0:01:23.556 *****
2026-01-07 00:50:12.144759 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:12.144763 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:12.144767 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:12.144771 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:12.144774 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:12.144794 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:12.144798 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:12.144802 | orchestrator |
2026-01-07 00:50:12.144805 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:50:12.144809 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:12.144813 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:12.144817 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:12.144821 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:12.144825 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:12.144829 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:12.144835 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:12.144839 | orchestrator |
2026-01-07 00:50:12.144843 | orchestrator |
2026-01-07 00:50:12.144847 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:50:12.144850 | orchestrator | Wednesday 07 January 2026 00:50:10 +0000 (0:00:03.008) 0:01:26.565 *****
2026-01-07 00:50:12.144854 | orchestrator | ===============================================================================
2026-01-07 00:50:12.144858 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.99s
2026-01-07 00:50:12.144862 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.29s
2026-01-07 00:50:12.144866 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.87s
2026-01-07 00:50:12.144869 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.27s
2026-01-07 00:50:12.144873 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.01s
2026-01-07 00:50:12.144877 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.38s
2026-01-07 00:50:12.144881 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.37s
2026-01-07 00:50:12.144884 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.22s
2026-01-07 00:50:12.144892 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.22s
2026-01-07 00:50:12.144895 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.08s
2026-01-07 00:50:12.144899 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.08s
2026-01-07 00:50:12.144905 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.07s
2026-01-07 00:50:12.144909 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.87s
2026-01-07 00:50:12.144913 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.59s
2026-01-07 00:50:12.144917 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.39s
2026-01-07 00:50:12.144921 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.18s
2026-01-07 00:50:12.144925 | orchestrator | 2026-01-07 00:50:12 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED
2026-01-07 00:50:12.144929 | orchestrator | 2026-01-07 00:50:12 | INFO  | Task 538e3d94-539c-4c48-98c4-604d606ae7fe is in state SUCCESS
2026-01-07 00:50:12.144933 | orchestrator | 2026-01-07 00:50:12 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:12.144940 | orchestrator | 2026-01-07 00:50:12 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:12.145521 | orchestrator | 2026-01-07 00:50:12 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:12.145679 | orchestrator | 2026-01-07 00:50:12 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:15.198486 | orchestrator | 2026-01-07 00:50:15 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED
2026-01-07 00:50:15.200737 | orchestrator | 2026-01-07 00:50:15 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:15.205729 | orchestrator | 2026-01-07 00:50:15 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:15.212040 | orchestrator | 2026-01-07 00:50:15 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:15.212140 | orchestrator | 2026-01-07 00:50:15 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:18.252966 | orchestrator | 2026-01-07 00:50:18 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED
2026-01-07 00:50:18.254671 | orchestrator | 2026-01-07 00:50:18 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:18.255486 | orchestrator | 2026-01-07 00:50:18 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:18.256923 | orchestrator | 2026-01-07 00:50:18 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:18.256964 | orchestrator | 2026-01-07 00:50:18 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:21.303613 | orchestrator | 2026-01-07 00:50:21 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state STARTED
2026-01-07 00:50:21.306881 | orchestrator | 2026-01-07 00:50:21 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:21.310392 | orchestrator | 2026-01-07 00:50:21 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:21.314219 | orchestrator | 2026-01-07 00:50:21 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:21.314407 | orchestrator | 2026-01-07 00:50:21 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:24.374736 | orchestrator | 2026-01-07 00:50:24 | INFO  | Task e7a0cfc1-1eb7-4b22-85f4-eaa5df8f5ef2 is in state SUCCESS
2026-01-07 00:50:24.376807 | orchestrator | 2026-01-07 00:50:24 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:24.379699 | orchestrator | 2026-01-07 00:50:24 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:24.382869 | orchestrator | 2026-01-07 00:50:24 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:24.382915 | orchestrator | 2026-01-07 00:50:24 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:27.445104 | orchestrator | 2026-01-07 00:50:27 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:27.451555 | orchestrator | 2026-01-07 00:50:27 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:27.457413 | orchestrator | 2026-01-07 00:50:27 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:27.457471 | orchestrator | 2026-01-07 00:50:27 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:30.500194 | orchestrator | 2026-01-07 00:50:30 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:30.501272 | orchestrator | 2026-01-07 00:50:30 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:30.502901 | orchestrator | 2026-01-07 00:50:30 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:30.502982 | orchestrator | 2026-01-07 00:50:30 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:33.585871 | orchestrator | 2026-01-07 00:50:33 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:33.589701 | orchestrator | 2026-01-07 00:50:33 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:33.591644 | orchestrator | 2026-01-07 00:50:33 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:33.591694 | orchestrator | 2026-01-07 00:50:33 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:36.630915 | orchestrator | 2026-01-07 00:50:36 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:36.632845 | orchestrator | 2026-01-07 00:50:36 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:36.634889 | orchestrator | 2026-01-07 00:50:36 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:36.634925 | orchestrator | 2026-01-07 00:50:36 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:39.686547 | orchestrator | 2026-01-07 00:50:39 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:39.688360 | orchestrator | 2026-01-07 00:50:39 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:39.690419 | orchestrator | 2026-01-07 00:50:39 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:39.690475 | orchestrator | 2026-01-07 00:50:39 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:42.742764 | orchestrator | 2026-01-07 00:50:42 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:42.743852 | orchestrator | 2026-01-07 00:50:42 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:42.744838 | orchestrator | 2026-01-07 00:50:42 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:42.744866 | orchestrator | 2026-01-07 00:50:42 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:45.789839 | orchestrator | 2026-01-07 00:50:45 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:45.794767 | orchestrator | 2026-01-07 00:50:45 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:45.795915 | orchestrator | 2026-01-07 00:50:45 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:45.796056 | orchestrator | 2026-01-07 00:50:45 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:48.847346 | orchestrator | 2026-01-07 00:50:48 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:48.849273 | orchestrator | 2026-01-07 00:50:48 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:48.850550 | orchestrator | 2026-01-07 00:50:48 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:48.850917 | orchestrator | 2026-01-07 00:50:48 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:51.900785 | orchestrator | 2026-01-07 00:50:51 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:51.900855 | orchestrator | 2026-01-07 00:50:51 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:51.900865 | orchestrator | 2026-01-07 00:50:51 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:51.900871 | orchestrator | 2026-01-07 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:54.929834 | orchestrator | 2026-01-07 00:50:54 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:54.931143 | orchestrator | 2026-01-07 00:50:54 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:54.933491 | orchestrator | 2026-01-07 00:50:54 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:54.933540 | orchestrator | 2026-01-07 00:50:54 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:57.980092 | orchestrator | 2026-01-07 00:50:57 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:50:57.983340 | orchestrator | 2026-01-07 00:50:57 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:50:57.984811 | orchestrator | 2026-01-07 00:50:57 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:50:57.984847 | orchestrator | 2026-01-07 00:50:57 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:01.027175 | orchestrator | 2026-01-07 00:51:01 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:51:01.029236 | orchestrator | 2026-01-07 00:51:01 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state STARTED
2026-01-07 00:51:01.031034 | orchestrator | 2026-01-07 00:51:01 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED
2026-01-07 00:51:01.031089 | orchestrator | 2026-01-07 00:51:01 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:04.097147 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED
2026-01-07 00:51:04.097208 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED
2026-01-07 00:51:04.097755 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:51:04.098637 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task 36815516-a313-4191-9078-1ef949755bba is in state STARTED
2026-01-07 00:51:04.099423 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED
2026-01-07 00:51:04.103038 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task 133ccc97-33eb-4536-9669-c9e9d3b3b426 is in state SUCCESS
2026-01-07 00:51:04.104776 | orchestrator |
2026-01-07 00:51:04.104834 | orchestrator |
2026-01-07 00:51:04.104844 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-01-07 00:51:04.104852 | orchestrator |
2026-01-07 00:51:04.104858 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-07 00:51:04.104865 | orchestrator | Wednesday 07 January 2026 00:49:05 +0000 (0:00:00.359) 0:00:00.359 *****
2026-01-07 00:51:04.104871 | orchestrator | ok: [testbed-manager]
2026-01-07 00:51:04.104878 | orchestrator |
2026-01-07 00:51:04.104885 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-07 00:51:04.104951 | orchestrator | Wednesday 07 January 2026 00:49:06 +0000 (0:00:01.161) 0:00:01.521 *****
2026-01-07 00:51:04.104960 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-07 00:51:04.104967 | orchestrator |
2026-01-07 00:51:04.104973 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-07 00:51:04.104979 | orchestrator | Wednesday 07 January 2026 00:49:07 +0000 (0:00:00.703) 0:00:02.225 *****
2026-01-07 00:51:04.104986 | orchestrator | changed: [testbed-manager]
2026-01-07 00:51:04.104993 | orchestrator |
2026-01-07 00:51:04.105000 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-07 00:51:04.105006 | orchestrator | Wednesday 07 January 2026 00:49:08 +0000 (0:00:01.105) 0:00:03.331 *****
2026-01-07 00:51:04.105013 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-07 00:51:04.105020 | orchestrator | ok: [testbed-manager]
2026-01-07 00:51:04.105026 | orchestrator |
2026-01-07 00:51:04.105032 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-07 00:51:04.105038 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:59.745) 0:01:03.076 *****
2026-01-07 00:51:04.105044 | orchestrator | changed: [testbed-manager]
2026-01-07 00:51:04.105050 | orchestrator |
2026-01-07 00:51:04.105056 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:51:04.105063 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:51:04.105070 | orchestrator |
2026-01-07 00:51:04.105076 | orchestrator |
2026-01-07 00:51:04.105081 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:51:04.105164 | orchestrator | Wednesday 07 January 2026 00:50:23 +0000 (0:00:15.300) 0:01:18.376 *****
2026-01-07 00:51:04.105172 | orchestrator | ===============================================================================
2026-01-07 00:51:04.105178 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 59.75s
2026-01-07 00:51:04.105184 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 15.30s
2026-01-07 00:51:04.105190 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.16s
2026-01-07 00:51:04.105196 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.11s
2026-01-07 00:51:04.105201 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.70s
2026-01-07 00:51:04.105207 | orchestrator |
2026-01-07 00:51:04.105213 | orchestrator |
2026-01-07 00:51:04.105220 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-07 00:51:04.105226 | orchestrator |
2026-01-07 00:51:04.105232 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-07 00:51:04.105238 | orchestrator | Wednesday 07 January 2026 00:48:35 +0000 (0:00:00.222) 0:00:00.222 *****
2026-01-07 00:51:04.105245 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:51:04.105253 | orchestrator |
2026-01-07 00:51:04.105259 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-07 00:51:04.105265 | orchestrator | Wednesday 07 January 2026 00:48:37 +0000 (0:00:01.204) 0:00:01.427 *****
2026-01-07 00:51:04.105285 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-07 00:51:04.105292 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'},
'fluentd']) 2026-01-07 00:51:04.105299 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:51:04.105305 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:51:04.105312 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:51:04.105318 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:51:04.105325 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:51:04.105331 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:51:04.105337 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:51:04.105343 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:51:04.105350 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:51:04.105356 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:51:04.105363 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:51:04.105370 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:51:04.105378 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:51:04.105385 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:51:04.105406 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:51:04.105414 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:51:04.105422 | 
orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:51:04.105429 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:51:04.105438 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:51:04.105445 | orchestrator | 2026-01-07 00:51:04.105453 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-07 00:51:04.105461 | orchestrator | Wednesday 07 January 2026 00:48:41 +0000 (0:00:04.288) 0:00:05.715 ***** 2026-01-07 00:51:04.105469 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:51:04.105477 | orchestrator | 2026-01-07 00:51:04.105485 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-07 00:51:04.105493 | orchestrator | Wednesday 07 January 2026 00:48:42 +0000 (0:00:01.428) 0:00:07.143 ***** 2026-01-07 00:51:04.105504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.105518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.105534 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.105559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.105566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.105575 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.105589 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.105598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.105610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.105622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.105629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.105636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.105750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.105765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.105784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.106396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.106455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.106465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.106473 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.106480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.106486 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.106493 | orchestrator | 2026-01-07 00:51:04.106500 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-07 00:51:04.106507 | orchestrator | Wednesday 07 January 2026 00:48:48 +0000 (0:00:05.996) 0:00:13.140 ***** 2026-01-07 00:51:04.106529 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:51:04.106538 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:04.106545 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:04.106557 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:51:04.106567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:51:04.106574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 
00:51:04.106581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:04.106588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:51:04.106595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:04.106610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:04.106617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:51:04.106628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:04.106634 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:04.106831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:04.106851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.106860 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:51:04.106867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.106875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.106883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.106895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.106910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.106918 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:51:04.106925 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:51:04.106931 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:51:04.106938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.106945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.106953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.106959 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:51:04.106967 | orchestrator |
2026-01-07 00:51:04.106974 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-07 00:51:04.106981 | orchestrator | Wednesday 07 January 2026 00:48:50 +0000 (0:00:01.619) 0:00:14.759 *****
2026-01-07 00:51:04.106989 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107003 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107017 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107030 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:51:04.107038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107066 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:51:04.107073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107093 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:51:04.107100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107132 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:51:04.107138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107163 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:51:04.107169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107201 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:51:04.107209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107234 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:51:04.107241 | orchestrator |
2026-01-07 00:51:04.107248 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-01-07 00:51:04.107254 | orchestrator | Wednesday 07 January 2026 00:48:53 +0000 (0:00:02.622) 0:00:17.382 *****
2026-01-07 00:51:04.107262 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:51:04.107268 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:51:04.107275 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:51:04.107282 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:51:04.107288 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:51:04.107295 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:51:04.107302 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:51:04.107309 | orchestrator |
2026-01-07 00:51:04.107315 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-01-07 00:51:04.107322 | orchestrator | Wednesday 07 January 2026 00:48:54 +0000 (0:00:01.367) 0:00:18.750 *****
2026-01-07 00:51:04.107329 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:51:04.107336 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:51:04.107342 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:51:04.107349 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:51:04.107356 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:51:04.107363 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:51:04.107370 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:51:04.107376 | orchestrator |
2026-01-07 00:51:04.107397 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-01-07 00:51:04.107404 | orchestrator | Wednesday 07 January 2026 00:48:56 +0000 (0:00:01.861) 0:00:20.611 *****
2026-01-07 00:51:04.107411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107461 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107487 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.107511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107519 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107602 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107614 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.107621 | orchestrator |
2026-01-07 00:51:04.107627 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-07 00:51:04.107633 | orchestrator | Wednesday 07 January 2026 00:49:05 +0000 (0:00:09.278) 0:00:29.890 *****
2026-01-07 00:51:04.107639 | orchestrator | [WARNING]: Skipped
2026-01-07 00:51:04.107661 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-07 00:51:04.107668 | orchestrator | to this access issue:
2026-01-07 00:51:04.107674 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-07 00:51:04.107680 | orchestrator | directory
2026-01-07 00:51:04.107686 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:51:04.107692 | orchestrator |
2026-01-07 00:51:04.107698 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-07 00:51:04.107712 | orchestrator | Wednesday 07 January 2026 00:49:07 +0000 (0:00:01.429) 0:00:31.319 *****
2026-01-07 00:51:04.107724 | orchestrator | [WARNING]: Skipped
2026-01-07 00:51:04.107731 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-07 00:51:04.107737 | orchestrator | to this access issue:
2026-01-07 00:51:04.107751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-07 00:51:04.107758 | orchestrator | directory
2026-01-07 00:51:04.107765 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:51:04.107771 | orchestrator |
2026-01-07 00:51:04.107778 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-07 00:51:04.107784 | orchestrator | Wednesday 07 January 2026 00:49:07 +0000 (0:00:00.775) 0:00:32.095 *****
2026-01-07 00:51:04.107790 | orchestrator | [WARNING]: Skipped
2026-01-07 00:51:04.107796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-01-07 00:51:04.107802 | orchestrator | to this access issue:
2026-01-07 00:51:04.107808 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-01-07 00:51:04.107814 | orchestrator | directory
2026-01-07 00:51:04.107820 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:51:04.107826 | orchestrator |
2026-01-07 00:51:04.107832 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-01-07 00:51:04.107839 | orchestrator | Wednesday 07 January 2026 00:49:08 +0000 (0:00:00.820) 0:00:32.916 *****
2026-01-07 00:51:04.107845 | orchestrator | [WARNING]: Skipped
2026-01-07 00:51:04.107851 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-01-07 00:51:04.107857 | orchestrator | to this access issue:
2026-01-07 00:51:04.107864 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-01-07 00:51:04.107871 | orchestrator | directory
2026-01-07 00:51:04.107877 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:51:04.107883 | orchestrator |
2026-01-07 00:51:04.107889 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-01-07 00:51:04.107896 | orchestrator | Wednesday 07 January 2026 00:49:09 +0000 (0:00:01.038) 0:00:33.955 *****
2026-01-07 00:51:04.107902 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:51:04.107909 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:51:04.107915 | orchestrator | changed: [testbed-manager]
2026-01-07 00:51:04.107922 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:51:04.107929 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:51:04.107935 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:51:04.107941 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:51:04.107948 | orchestrator |
2026-01-07 00:51:04.107955 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-01-07 00:51:04.107962 | orchestrator | Wednesday 07 January 2026 00:49:13 +0000 (0:00:04.163) 0:00:38.118 *****
2026-01-07 00:51:04.107969 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:51:04.107976 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:51:04.107982 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:51:04.107997 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:51:04.108007 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:51:04.108014 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:51:04.108021 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:51:04.108028 | orchestrator |
2026-01-07 00:51:04.108034 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-01-07 00:51:04.108041 | orchestrator | Wednesday 07 January 2026 00:49:16 +0000 (0:00:03.003) 0:00:41.122 *****
2026-01-07 00:51:04.108053 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:51:04.108061 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:51:04.108067 | orchestrator | changed: [testbed-manager]
2026-01-07 00:51:04.108074 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:51:04.108080 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:51:04.108086 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:51:04.108093 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:51:04.108099 | orchestrator |
2026-01-07 00:51:04.108105 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-01-07 00:51:04.108111 | orchestrator | Wednesday 07 January 2026 00:49:21 +0000 (0:00:04.737) 0:00:45.859 *****
2026-01-07 00:51:04.108118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108136 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108143 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108150 |
orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.108162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:04.108174 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.108184 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:04.108191 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.108198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:04.108205 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:51:04.108212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
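The "Ensure RabbitMQ Erlang cookie exists" task above distributes a shared secret that all RabbitMQ/Erlang nodes in the cluster must agree on. As a hedged illustration only: Kolla-Ansible takes the real cookie value from its generated passwords file, but a cookie of the same shape (a long random hex string) can be sketched like this; the generation command is an assumption for demonstration, not the project's actual mechanism.

```shell
# Illustrative sketch: produce a RabbitMQ-style Erlang cookie value
# (40 lowercase hex characters from 20 random bytes). Kolla-Ansible
# actually sources this from passwords.yml, not from /dev/urandom.
cookie="$(head -c 20 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "$cookie"
```

Every node in the play recap receives the same value; a mismatched cookie is a classic cause of RabbitMQ clustering failures.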
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108223 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108234 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108245 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108261 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108275 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108282 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108289 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108295 | orchestrator |
2026-01-07 00:51:04.108302 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-01-07 00:51:04.108312 | orchestrator | Wednesday 07 January 2026 00:49:25 +0000 (0:00:03.542) 0:00:49.401 *****
2026-01-07 00:51:04.108319 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-07 00:51:04.108326 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-07 00:51:04.108332 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-07 00:51:04.108342 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-07 00:51:04.108349 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-07 00:51:04.108356 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-07 00:51:04.108362 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-07 00:51:04.108369 | orchestrator |
2026-01-07 00:51:04.108375 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-01-07 00:51:04.108382 | orchestrator | Wednesday 07 January 2026 00:49:29 +0000 (0:00:04.591) 0:00:53.993 *****
2026-01-07 00:51:04.108389 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-07 00:51:04.108395 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-07 00:51:04.108401 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-07 00:51:04.108408 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-07 00:51:04.108414 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-07 00:51:04.108421 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-07 00:51:04.108428 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-07 00:51:04.108434 | orchestrator |
2026-01-07 00:51:04.108441 | orchestrator | TASK [common : Check common containers] ****************************************
2026-01-07 00:51:04.108448 | orchestrator | Wednesday 07 January 2026 00:49:32 +0000 (0:00:03.013) 0:00:57.006 *****
2026-01-07 00:51:04.108457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108464 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108520 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108560 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:51:04.108584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:51:04.108631 | orchestrator |
2026-01-07 00:51:04.108640 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-01-07 00:51:04.108669 | orchestrator | Wednesday 07 January 2026 00:49:37 +0000 (0:00:04.997) 0:01:02.004 *****
2026-01-07 00:51:04.108676 | orchestrator | changed: [testbed-manager]
2026-01-07 00:51:04.108683 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:51:04.108689 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:51:04.108696 | orchestrator | changed: [testbed-node-2]
2026-01-07
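The "Creating log volume" and "Link kolla_logs volume to /var/log/kolla" tasks create a Docker volume for service logs and then expose it at a conventional host path. As a hedged sketch only (a scratch directory stands in for the real Docker volume mountpoint, so the paths below are illustrative assumptions, not what runs on the testbed nodes):

```shell
# Sketch of the volume-plus-symlink pattern, without Docker:
# in the real deployment the mountpoint would come from something like
# `docker volume inspect -f '{{ .Mountpoint }}' kolla_logs`.
workdir="$(mktemp -d)"
volume_mountpoint="$workdir/volumes/kolla_logs/_data"  # stand-in for the Docker volume
mkdir -p "$volume_mountpoint"
log_link="$workdir/var/log/kolla"                      # stand-in for /var/log/kolla
mkdir -p "$(dirname "$log_link")"
ln -s "$volume_mountpoint" "$log_link"                 # the "Link kolla_logs volume" step
ls -ld "$log_link"
```

The symlink lets operators tail service logs from a stable host path while the containers keep writing into the named volume.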
00:51:04.108703 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:51:04.108710 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:51:04.108716 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:51:04.108723 | orchestrator |
2026-01-07 00:51:04.108730 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-01-07 00:51:04.108737 | orchestrator | Wednesday 07 January 2026 00:49:39 +0000 (0:00:01.830) 0:01:03.834 *****
2026-01-07 00:51:04.108744 | orchestrator | changed: [testbed-manager]
2026-01-07 00:51:04.108751 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:51:04.108758 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:51:04.108765 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:51:04.108771 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:51:04.108778 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:51:04.108784 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:51:04.108791 | orchestrator |
2026-01-07 00:51:04.108797 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:51:04.108802 | orchestrator | Wednesday 07 January 2026 00:49:40 +0000 (0:00:01.225) 0:01:05.060 *****
2026-01-07 00:51:04.108809 | orchestrator |
2026-01-07 00:51:04.108815 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:51:04.108820 | orchestrator | Wednesday 07 January 2026 00:49:40 +0000 (0:00:00.068) 0:01:05.129 *****
2026-01-07 00:51:04.108827 | orchestrator |
2026-01-07 00:51:04.108833 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:51:04.108839 | orchestrator | Wednesday 07 January 2026 00:49:40 +0000 (0:00:00.063) 0:01:05.192 *****
2026-01-07 00:51:04.108845 | orchestrator |
2026-01-07 00:51:04.108851 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:51:04.108858 | orchestrator | Wednesday 07 January 2026 00:49:41 +0000 (0:00:00.242) 0:01:05.435 *****
2026-01-07 00:51:04.108864 | orchestrator |
2026-01-07 00:51:04.108871 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:51:04.108886 | orchestrator | Wednesday 07 January 2026 00:49:41 +0000 (0:00:00.064) 0:01:05.500 *****
2026-01-07 00:51:04.108893 | orchestrator |
2026-01-07 00:51:04.108899 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:51:04.108906 | orchestrator | Wednesday 07 January 2026 00:49:41 +0000 (0:00:00.060) 0:01:05.560 *****
2026-01-07 00:51:04.108912 | orchestrator |
2026-01-07 00:51:04.108918 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:51:04.108925 | orchestrator | Wednesday 07 January 2026 00:49:41 +0000 (0:00:00.065) 0:01:05.625 *****
2026-01-07 00:51:04.108931 | orchestrator |
2026-01-07 00:51:04.108938 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-07 00:51:04.108944 | orchestrator | Wednesday 07 January 2026 00:49:41 +0000 (0:00:00.091) 0:01:05.716 *****
2026-01-07 00:51:04.108951 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:51:04.108957 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:51:04.108963 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:51:04.108969 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:51:04.108976 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:51:04.108982 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:51:04.108988 | orchestrator | changed: [testbed-manager]
2026-01-07 00:51:04.108994 | orchestrator |
2026-01-07 00:51:04.109001 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-07 00:51:04.109008 | orchestrator | Wednesday 07 January 2026 00:50:15 +0000 (0:00:33.582) 0:01:39.299 *****
2026-01-07 00:51:04.109015 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:51:04.109022 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:51:04.109028 | orchestrator | changed: [testbed-manager]
2026-01-07 00:51:04.109034 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:51:04.109041 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:51:04.109047 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:51:04.109054 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:51:04.109060 | orchestrator |
2026-01-07 00:51:04.109067 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-07 00:51:04.109073 | orchestrator | Wednesday 07 January 2026 00:50:49 +0000 (0:00:33.995) 0:02:13.295 *****
2026-01-07 00:51:04.109080 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:51:04.109087 | orchestrator | ok: [testbed-manager]
2026-01-07 00:51:04.109094 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:51:04.109100 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:51:04.109107 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:51:04.109113 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:51:04.109120 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:51:04.109126 | orchestrator |
2026-01-07 00:51:04.109132 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-07 00:51:04.109139 | orchestrator | Wednesday 07 January 2026 00:50:51 +0000 (0:00:02.105) 0:02:15.400 *****
2026-01-07 00:51:04.109146 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:51:04.109153 | orchestrator | changed: [testbed-manager]
2026-01-07 00:51:04.109159 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:51:04.109166 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:51:04.109172 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:51:04.109178 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:51:04.109185 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:51:04.109192 | orchestrator |
2026-01-07 00:51:04.109199 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:51:04.109206 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:51:04.109213 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:51:04.109226 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:51:04.109239 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:51:04.109246 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:51:04.109253 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:51:04.109259 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:51:04.109266 | orchestrator |
2026-01-07 00:51:04.109272 | orchestrator |
2026-01-07 00:51:04.109279 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:51:04.109285 | orchestrator | Wednesday 07 January 2026 00:51:01 +0000 (0:00:09.895) 0:02:25.296 *****
2026-01-07 00:51:04.109291 | orchestrator | ===============================================================================
2026-01-07 00:51:04.109297 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.00s
2026-01-07 00:51:04.109303 | orchestrator | common : Restart fluentd container ------------------------------------- 33.58s
2026-01-07 00:51:04.109310 | orchestrator | common : Restart cron container ----------------------------------------- 9.90s
2026-01-07 00:51:04.109316 | orchestrator | common : Copying over config.json files for services -------------------- 9.28s
2026-01-07 00:51:04.109322 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.00s
2026-01-07 00:51:04.109328 | orchestrator | common : Check common containers ---------------------------------------- 5.00s
2026-01-07 00:51:04.109334 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.74s
2026-01-07 00:51:04.109343 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.59s
2026-01-07 00:51:04.109350 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.29s
2026-01-07 00:51:04.109356 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.16s
2026-01-07 00:51:04.109363 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.54s
2026-01-07 00:51:04.109369 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.01s
2026-01-07 00:51:04.109376 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.00s
2026-01-07 00:51:04.109383 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.62s
2026-01-07 00:51:04.109390 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.11s
2026-01-07 00:51:04.109396 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.86s
2026-01-07 00:51:04.109403 | orchestrator | common : Creating log volume -------------------------------------------- 1.83s
2026-01-07 00:51:04.109410 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.62s
2026-01-07 00:51:04.109417 | orchestrator | common : Find custom fluentd input config
files ------------------------- 1.43s 2026-01-07 00:51:04.109424 | orchestrator | common : include_tasks -------------------------------------------------- 1.43s 2026-01-07 00:51:04.109431 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:04.109439 | orchestrator | 2026-01-07 00:51:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:07.152942 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:51:07.154557 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED 2026-01-07 00:51:07.155839 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:07.156706 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task 36815516-a313-4191-9078-1ef949755bba is in state STARTED 2026-01-07 00:51:07.160066 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED 2026-01-07 00:51:07.160897 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:07.160943 | orchestrator | 2026-01-07 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:10.182482 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:51:10.183520 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED 2026-01-07 00:51:10.184546 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:10.185372 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 36815516-a313-4191-9078-1ef949755bba is in state STARTED 2026-01-07 00:51:10.188259 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a 
is in state STARTED 2026-01-07 00:51:10.190033 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:10.190083 | orchestrator | 2026-01-07 00:51:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:13.227633 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:51:13.227678 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED 2026-01-07 00:51:13.228183 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:13.228867 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 36815516-a313-4191-9078-1ef949755bba is in state STARTED 2026-01-07 00:51:13.232257 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED 2026-01-07 00:51:13.233063 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:13.233091 | orchestrator | 2026-01-07 00:51:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:16.306761 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:51:16.307490 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED 2026-01-07 00:51:16.310737 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:16.311276 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 36815516-a313-4191-9078-1ef949755bba is in state STARTED 2026-01-07 00:51:16.312220 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED 2026-01-07 00:51:16.312854 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in 
state STARTED 2026-01-07 00:51:16.312917 | orchestrator | 2026-01-07 00:51:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:19.364530 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:51:19.364615 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED 2026-01-07 00:51:19.364627 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:19.365061 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 36815516-a313-4191-9078-1ef949755bba is in state SUCCESS 2026-01-07 00:51:19.366476 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED 2026-01-07 00:51:19.366641 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:19.366655 | orchestrator | 2026-01-07 00:51:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:22.403570 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:51:22.405908 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED 2026-01-07 00:51:22.405967 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:51:22.407167 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:22.408880 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED 2026-01-07 00:51:22.410264 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:22.410303 | orchestrator | 2026-01-07 00:51:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 
00:51:25.521120 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:51:25.521186 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED 2026-01-07 00:51:25.521195 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:51:25.528567 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:25.534319 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED 2026-01-07 00:51:25.538164 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:25.538227 | orchestrator | 2026-01-07 00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:28.592736 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:51:28.594861 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED 2026-01-07 00:51:28.596597 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:51:28.597436 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:28.599009 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED 2026-01-07 00:51:28.602385 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:28.602433 | orchestrator | 2026-01-07 00:51:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:31.669934 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 
00:51:31.670684 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state STARTED 2026-01-07 00:51:31.671796 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:51:31.672881 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:31.675147 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED 2026-01-07 00:51:31.675946 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:31.675984 | orchestrator | 2026-01-07 00:51:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:34.724120 | orchestrator | 2026-01-07 00:51:34.724238 | orchestrator | 2026-01-07 00:51:34.724251 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:51:34.724261 | orchestrator | 2026-01-07 00:51:34.724269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:51:34.724276 | orchestrator | Wednesday 07 January 2026 00:51:06 +0000 (0:00:00.329) 0:00:00.329 ***** 2026-01-07 00:51:34.724283 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:34.724290 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:34.724296 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:34.724303 | orchestrator | 2026-01-07 00:51:34.724310 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:51:34.724316 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:00.364) 0:00:00.694 ***** 2026-01-07 00:51:34.724323 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-07 00:51:34.724330 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-07 00:51:34.724337 | orchestrator | 
ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-07 00:51:34.724343 | orchestrator | 2026-01-07 00:51:34.724349 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-07 00:51:34.724356 | orchestrator | 2026-01-07 00:51:34.724363 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-07 00:51:34.724370 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:00.510) 0:00:01.204 ***** 2026-01-07 00:51:34.724377 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:51:34.724384 | orchestrator | 2026-01-07 00:51:34.724389 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-07 00:51:34.724396 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:00.551) 0:00:01.756 ***** 2026-01-07 00:51:34.724403 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-07 00:51:34.724409 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-07 00:51:34.724414 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-07 00:51:34.724418 | orchestrator | 2026-01-07 00:51:34.724422 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-07 00:51:34.724426 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:00.826) 0:00:02.583 ***** 2026-01-07 00:51:34.724429 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-07 00:51:34.724433 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-07 00:51:34.724437 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-07 00:51:34.724441 | orchestrator | 2026-01-07 00:51:34.724445 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-01-07 00:51:34.724448 | orchestrator | Wednesday 07 
January 2026 00:51:10 +0000 (0:00:01.784) 0:00:04.367 ***** 2026-01-07 00:51:34.724452 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:34.724456 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:34.724460 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:34.724463 | orchestrator | 2026-01-07 00:51:34.724467 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-07 00:51:34.724472 | orchestrator | Wednesday 07 January 2026 00:51:14 +0000 (0:00:03.309) 0:00:07.677 ***** 2026-01-07 00:51:34.724478 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:34.724486 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:34.724511 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:34.724519 | orchestrator | 2026-01-07 00:51:34.724526 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:51:34.724532 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:51:34.724567 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:51:34.724572 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:51:34.724576 | orchestrator | 2026-01-07 00:51:34.724580 | orchestrator | 2026-01-07 00:51:34.724584 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:51:34.724587 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:03.900) 0:00:11.577 ***** 2026-01-07 00:51:34.724591 | orchestrator | =============================================================================== 2026-01-07 00:51:34.724595 | orchestrator | memcached : Restart memcached container --------------------------------- 3.90s 2026-01-07 00:51:34.724599 | orchestrator | memcached : Check memcached container 
----------------------------------- 3.31s 2026-01-07 00:51:34.724603 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.78s 2026-01-07 00:51:34.724606 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.83s 2026-01-07 00:51:34.724610 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.55s 2026-01-07 00:51:34.724614 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-01-07 00:51:34.724617 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-01-07 00:51:34.724621 | orchestrator | 2026-01-07 00:51:34.724625 | orchestrator | 2026-01-07 00:51:34.724629 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:51:34.724632 | orchestrator | 2026-01-07 00:51:34.724636 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:51:34.724640 | orchestrator | Wednesday 07 January 2026 00:51:06 +0000 (0:00:00.315) 0:00:00.315 ***** 2026-01-07 00:51:34.724644 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:34.724648 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:34.724652 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:34.724655 | orchestrator | 2026-01-07 00:51:34.724660 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:51:34.724673 | orchestrator | Wednesday 07 January 2026 00:51:06 +0000 (0:00:00.361) 0:00:00.677 ***** 2026-01-07 00:51:34.724677 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-07 00:51:34.724683 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-07 00:51:34.724689 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-07 00:51:34.724697 | orchestrator | 2026-01-07 00:51:34.724707 | 
orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-07 00:51:34.724712 | orchestrator | 2026-01-07 00:51:34.724718 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-07 00:51:34.724723 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:00.528) 0:00:01.205 ***** 2026-01-07 00:51:34.724729 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:51:34.724734 | orchestrator | 2026-01-07 00:51:34.724740 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-07 00:51:34.724746 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:00.528) 0:00:01.734 ***** 2026-01-07 00:51:34.724753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 
00:51:34.724786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724822 | orchestrator | 2026-01-07 00:51:34.724828 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-07 00:51:34.724835 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:01.452) 0:00:03.187 ***** 2026-01-07 00:51:34.724841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724877 | orchestrator | 2026-01-07 00:51:34.724881 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-07 00:51:34.724884 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:03.018) 0:00:06.206 ***** 2026-01-07 00:51:34.724888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 
'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724917 | orchestrator | 2026-01-07 00:51:34.724923 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-01-07 00:51:34.724927 | orchestrator | Wednesday 07 January 2026 00:51:16 +0000 (0:00:03.868) 0:00:10.074 ***** 2026-01-07 00:51:34.724931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 
2026-01-07 00:51:34.724949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:51:34.724962 | orchestrator | 2026-01-07 00:51:34.724966 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-07 00:51:34.724970 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:01.866) 0:00:11.941 ***** 2026-01-07 00:51:34.724974 | orchestrator | 2026-01-07 00:51:34.724978 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-07 00:51:34.725032 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 
(0:00:00.125) 0:00:12.066 ***** 2026-01-07 00:51:34.725042 | orchestrator | 2026-01-07 00:51:34.725048 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-07 00:51:34.725055 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:00.100) 0:00:12.167 ***** 2026-01-07 00:51:34.725060 | orchestrator | 2026-01-07 00:51:34.725066 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-07 00:51:34.725072 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:00.079) 0:00:12.246 ***** 2026-01-07 00:51:34.725078 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:34.725163 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:34.725175 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:34.725181 | orchestrator | 2026-01-07 00:51:34.725188 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-07 00:51:34.725194 | orchestrator | Wednesday 07 January 2026 00:51:28 +0000 (0:00:10.144) 0:00:22.391 ***** 2026-01-07 00:51:34.725201 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:34.725208 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:34.725214 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:34.725220 | orchestrator | 2026-01-07 00:51:34.725226 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:51:34.725233 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:51:34.725240 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:51:34.725247 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:51:34.725254 | orchestrator | 2026-01-07 00:51:34.725260 | orchestrator | 2026-01-07 00:51:34.725263 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:51:34.725267 | orchestrator | Wednesday 07 January 2026 00:51:33 +0000 (0:00:05.276) 0:00:27.667 ***** 2026-01-07 00:51:34.725271 | orchestrator | =============================================================================== 2026-01-07 00:51:34.725275 | orchestrator | redis : Restart redis container ---------------------------------------- 10.14s 2026-01-07 00:51:34.725279 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.28s 2026-01-07 00:51:34.725282 | orchestrator | redis : Copying over redis config files --------------------------------- 3.87s 2026-01-07 00:51:34.725286 | orchestrator | redis : Copying over default config.json files -------------------------- 3.02s 2026-01-07 00:51:34.725290 | orchestrator | redis : Check redis containers ------------------------------------------ 1.87s 2026-01-07 00:51:34.725294 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.45s 2026-01-07 00:51:34.725298 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-01-07 00:51:34.725301 | orchestrator | redis : include_tasks --------------------------------------------------- 0.53s 2026-01-07 00:51:34.725305 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-01-07 00:51:34.725309 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.31s 2026-01-07 00:51:34.725313 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:51:34.725317 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task bcd2185f-72b6-49d8-b693-d90299de53d8 is in state SUCCESS 2026-01-07 00:51:34.725321 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 
00:51:34.725325 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:51:34.725337 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state STARTED 2026-01-07 00:51:34.726228 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:51:34.726261 | orchestrator | 2026-01-07 00:51:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:17.511950 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:52:17.513024 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:52:17.514669 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:52:17.516926 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:52:17.519075 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task 291bd727-7305-41bd-9974-32f1a236972a is in state SUCCESS 2026-01-07 00:52:17.520275 | orchestrator | 2026-01-07 00:52:17.520303 | orchestrator | 2026-01-07 00:52:17.520308 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:52:17.520313 | orchestrator | 2026-01-07 00:52:17.520317 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:52:17.520321 | orchestrator | Wednesday 07 January 2026 00:51:06 +0000 (0:00:00.279) 0:00:00.279
***** 2026-01-07 00:52:17.520326 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:52:17.520330 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:52:17.520334 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:52:17.520338 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:52:17.520341 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:52:17.520345 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:52:17.520349 | orchestrator | 2026-01-07 00:52:17.520353 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:52:17.520357 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:00.829) 0:00:01.108 ***** 2026-01-07 00:52:17.520361 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:52:17.520365 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:52:17.520368 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:52:17.520372 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:52:17.520376 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:52:17.520380 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:52:17.520419 | orchestrator | 2026-01-07 00:52:17.520424 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-07 00:52:17.520428 | orchestrator | 2026-01-07 00:52:17.520432 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-07 00:52:17.520435 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:00.801) 0:00:01.910 ***** 2026-01-07 00:52:17.520440 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:52:17.520444 | orchestrator | 2026-01-07 00:52:17.520460 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-07 00:52:17.520465 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:01.383) 0:00:03.293 ***** 2026-01-07 00:52:17.520469 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-07 00:52:17.520473 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-07 00:52:17.520477 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-07 00:52:17.520481 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-07 00:52:17.520484 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-07 00:52:17.520488 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-07 00:52:17.520519 | orchestrator | 2026-01-07 00:52:17.520524 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-07 00:52:17.520527 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:01.508) 0:00:04.802 ***** 2026-01-07 00:52:17.520531 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-07 00:52:17.520535 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-07 00:52:17.520539 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-07 00:52:17.520543 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-07 00:52:17.520547 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-07 00:52:17.520550 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-07 00:52:17.520554 | orchestrator | 2026-01-07 00:52:17.520568 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-07 00:52:17.520572 | orchestrator | Wednesday 07 January 2026 00:51:13 +0000 
(0:00:02.072) 0:00:06.874 ***** 2026-01-07 00:52:17.520575 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-07 00:52:17.520579 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-07 00:52:17.520583 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:17.520587 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:52:17.520591 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-07 00:52:17.520595 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-07 00:52:17.520598 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:52:17.520602 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-07 00:52:17.520606 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:52:17.520616 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:52:17.520620 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-07 00:52:17.520624 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:52:17.520628 | orchestrator | 2026-01-07 00:52:17.520631 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-07 00:52:17.520635 | orchestrator | Wednesday 07 January 2026 00:51:15 +0000 (0:00:02.539) 0:00:09.414 ***** 2026-01-07 00:52:17.520639 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:17.520643 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:52:17.520646 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:52:17.520650 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:52:17.520654 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:52:17.520658 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:52:17.520661 | orchestrator | 2026-01-07 00:52:17.520665 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-07 00:52:17.520669 | orchestrator | Wednesday 07 January 2026 00:51:17 +0000 
(0:00:01.236) 0:00:10.650 ***** 2026-01-07 00:52:17.520683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520706 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520731 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520741 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520752 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520759 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520770 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520776 | orchestrator | 2026-01-07 00:52:17.520783 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-07 00:52:17.520789 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 (0:00:01.819) 0:00:12.470 ***** 2026-01-07 00:52:17.520796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520806 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520848 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520869 | orchestrator | 2026-01-07 00:52:17.520873 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-07 00:52:17.520877 | orchestrator | Wednesday 07 January 2026 00:51:24 +0000 (0:00:05.041) 0:00:17.511 ***** 2026-01-07 00:52:17.520881 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:17.520885 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:52:17.520889 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:52:17.520893 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:52:17.520896 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:52:17.520900 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:52:17.520914 | orchestrator | 2026-01-07 00:52:17.520918 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-07 00:52:17.520922 | orchestrator | Wednesday 07 January 2026 00:51:26 +0000 (0:00:02.409) 0:00:19.921 ***** 2026-01-07 00:52:17.520926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520951 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520974 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:52:17.520991 | orchestrator | 2026-01-07 00:52:17.520995 | 
orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:52:17.520999 | orchestrator | Wednesday 07 January 2026 00:51:29 +0000 (0:00:02.689) 0:00:22.611 ***** 2026-01-07 00:52:17.521003 | orchestrator | 2026-01-07 00:52:17.521007 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:52:17.521011 | orchestrator | Wednesday 07 January 2026 00:51:29 +0000 (0:00:00.503) 0:00:23.114 ***** 2026-01-07 00:52:17.521015 | orchestrator | 2026-01-07 00:52:17.521018 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:52:17.521022 | orchestrator | Wednesday 07 January 2026 00:51:30 +0000 (0:00:00.422) 0:00:23.536 ***** 2026-01-07 00:52:17.521026 | orchestrator | 2026-01-07 00:52:17.521030 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:52:17.521034 | orchestrator | Wednesday 07 January 2026 00:51:30 +0000 (0:00:00.666) 0:00:24.204 ***** 2026-01-07 00:52:17.521037 | orchestrator | 2026-01-07 00:52:17.521041 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:52:17.521045 | orchestrator | Wednesday 07 January 2026 00:51:31 +0000 (0:00:00.390) 0:00:24.594 ***** 2026-01-07 00:52:17.521049 | orchestrator | 2026-01-07 00:52:17.521052 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:52:17.521056 | orchestrator | Wednesday 07 January 2026 00:51:31 +0000 (0:00:00.489) 0:00:25.084 ***** 2026-01-07 00:52:17.521060 | orchestrator | 2026-01-07 00:52:17.521064 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-07 00:52:17.521068 | orchestrator | Wednesday 07 January 2026 00:51:32 +0000 (0:00:00.379) 0:00:25.463 ***** 2026-01-07 00:52:17.521071 | orchestrator | changed: [testbed-node-1] 
2026-01-07 00:52:17.521075 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:52:17.521079 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:52:17.521083 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:52:17.521087 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:52:17.521090 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:52:17.521094 | orchestrator | 2026-01-07 00:52:17.521098 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-07 00:52:17.521102 | orchestrator | Wednesday 07 January 2026 00:51:39 +0000 (0:00:07.452) 0:00:32.916 ***** 2026-01-07 00:52:17.521106 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:52:17.521110 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:52:17.521113 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:52:17.521117 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:52:17.521121 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:52:17.521125 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:52:17.521129 | orchestrator | 2026-01-07 00:52:17.521132 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-07 00:52:17.521136 | orchestrator | Wednesday 07 January 2026 00:51:41 +0000 (0:00:01.691) 0:00:34.608 ***** 2026-01-07 00:52:17.521140 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:52:17.521144 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:52:17.521148 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:52:17.521153 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:52:17.521160 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:52:17.521164 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:52:17.521169 | orchestrator | 2026-01-07 00:52:17.521173 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-07 00:52:17.521178 | orchestrator | Wednesday 07 January 2026 00:51:51 +0000 
(0:00:10.607) 0:00:45.215 ***** 2026-01-07 00:52:17.521182 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-07 00:52:17.521187 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-07 00:52:17.521192 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-07 00:52:17.521196 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-07 00:52:17.521200 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-07 00:52:17.521207 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-07 00:52:17.521212 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-07 00:52:17.521216 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-07 00:52:17.521221 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-07 00:52:17.521225 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-07 00:52:17.521229 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-07 00:52:17.521234 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-07 00:52:17.521238 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 
'absent'}) 2026-01-07 00:52:17.521242 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:52:17.521246 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:52:17.521251 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:52:17.521255 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:52:17.521259 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:52:17.521264 | orchestrator | 2026-01-07 00:52:17.521268 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-07 00:52:17.521273 | orchestrator | Wednesday 07 January 2026 00:51:58 +0000 (0:00:06.985) 0:00:52.201 ***** 2026-01-07 00:52:17.521277 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-07 00:52:17.521281 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-07 00:52:17.521292 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:52:17.521297 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:52:17.521301 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-07 00:52:17.521305 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:52:17.521310 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-07 00:52:17.521314 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-07 00:52:17.521319 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-07 00:52:17.521323 | orchestrator | 2026-01-07 00:52:17.521327 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-07 
00:52:17.521335 | orchestrator | Wednesday 07 January 2026 00:52:01 +0000 (0:00:03.160) 0:00:55.362 ***** 2026-01-07 00:52:17.521339 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-07 00:52:17.521348 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:52:17.521353 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-07 00:52:17.521357 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:52:17.521361 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-07 00:52:17.521366 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:52:17.521370 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-07 00:52:17.521375 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-07 00:52:17.521379 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-07 00:52:17.521384 | orchestrator | 2026-01-07 00:52:17.521389 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-07 00:52:17.521393 | orchestrator | Wednesday 07 January 2026 00:52:05 +0000 (0:00:04.036) 0:00:59.398 ***** 2026-01-07 00:52:17.521397 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:52:17.521401 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:52:17.521406 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:52:17.521410 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:52:17.521415 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:52:17.521420 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:52:17.521424 | orchestrator | 2026-01-07 00:52:17.521431 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:52:17.521436 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:52:17.521441 | orchestrator | testbed-node-1 : 
ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:52:17.521445 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:52:17.521492 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 00:52:17.521497 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 00:52:17.521504 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 00:52:17.521509 | orchestrator | 2026-01-07 00:52:17.521513 | orchestrator | 2026-01-07 00:52:17.521518 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:52:17.521523 | orchestrator | Wednesday 07 January 2026 00:52:14 +0000 (0:00:08.762) 0:01:08.161 ***** 2026-01-07 00:52:17.521527 | orchestrator | =============================================================================== 2026-01-07 00:52:17.521532 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.37s 2026-01-07 00:52:17.521536 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 7.45s 2026-01-07 00:52:17.521541 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.99s 2026-01-07 00:52:17.521545 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.04s 2026-01-07 00:52:17.521550 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.04s 2026-01-07 00:52:17.521555 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.16s 2026-01-07 00:52:17.521559 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.85s 2026-01-07 00:52:17.521563 | orchestrator | openvswitch : Check 
openvswitch containers ------------------------------ 2.69s 2026-01-07 00:52:17.521570 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.54s 2026-01-07 00:52:17.521574 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.41s 2026-01-07 00:52:17.521578 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.07s 2026-01-07 00:52:17.521581 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.82s 2026-01-07 00:52:17.521585 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.69s 2026-01-07 00:52:17.521589 | orchestrator | module-load : Load modules ---------------------------------------------- 1.51s 2026-01-07 00:52:17.521593 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.38s 2026-01-07 00:52:17.521596 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.24s 2026-01-07 00:52:17.521600 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s 2026-01-07 00:52:17.521604 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s 2026-01-07 00:52:17.521608 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:52:17.521612 | orchestrator | 2026-01-07 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:20.554282 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:52:20.558083 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:52:20.560524 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:52:20.562215 | orchestrator | 
2026-01-07 00:52:20 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:52:20.563978 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state STARTED 2026-01-07 00:52:20.564186 | orchestrator | 2026-01-07 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:15.740263 | orchestrator | 2026-01-07 00:53:15 | INFO  | Task 
f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:15.741433 | orchestrator | 2026-01-07 00:53:15 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:15.743724 | orchestrator | 2026-01-07 00:53:15 | INFO  | Task a82187dd-de14-430c-bbd0-cbccde6204d7 is in state STARTED 2026-01-07 00:53:15.744524 | orchestrator | 2026-01-07 00:53:15 | INFO  | Task 6966f62d-294d-4a17-a809-5884d4f6f999 is in state STARTED 2026-01-07 00:53:15.747672 | orchestrator | 2026-01-07 00:53:15 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:15.748268 | orchestrator | 2026-01-07 00:53:15 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:15.749603 | orchestrator | 2026-01-07 00:53:15 | INFO  | Task 0ef364d7-23df-47dd-9ad8-5c472fda0322 is in state SUCCESS 2026-01-07 00:53:15.750291 | orchestrator | 2026-01-07 00:53:15.751271 | orchestrator | 2026-01-07 00:53:15.751317 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-07 00:53:15.751341 | orchestrator | 2026-01-07 00:53:15.751346 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-07 00:53:15.751350 | orchestrator | Wednesday 07 January 2026 00:48:36 +0000 (0:00:00.193) 0:00:00.193 ***** 2026-01-07 00:53:15.751355 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:53:15.751360 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:53:15.751364 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:53:15.751368 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.751373 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.751379 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.751385 | orchestrator | 2026-01-07 00:53:15.751391 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-07 00:53:15.751398 | orchestrator | Wednesday 07 January 
2026 00:48:37 +0000 (0:00:00.777) 0:00:00.970 ***** 2026-01-07 00:53:15.751500 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.751510 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.751516 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.751522 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.751527 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.751533 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.751539 | orchestrator | 2026-01-07 00:53:15.751545 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-07 00:53:15.751551 | orchestrator | Wednesday 07 January 2026 00:48:38 +0000 (0:00:00.677) 0:00:01.648 ***** 2026-01-07 00:53:15.751558 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.751675 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.751685 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.751689 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.751692 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.751696 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.751701 | orchestrator | 2026-01-07 00:53:15.751704 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-07 00:53:15.751708 | orchestrator | Wednesday 07 January 2026 00:48:38 +0000 (0:00:00.740) 0:00:02.388 ***** 2026-01-07 00:53:15.751730 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:15.751734 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:15.751737 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:15.751741 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.751744 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.751748 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.751752 | orchestrator | 2026-01-07 00:53:15.751756 | orchestrator | TASK 
[k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-07 00:53:15.751760 | orchestrator | Wednesday 07 January 2026 00:48:40 +0000 (0:00:01.864) 0:00:04.252 ***** 2026-01-07 00:53:15.751764 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:15.751768 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:15.751771 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:15.751775 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.751779 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.751782 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.751786 | orchestrator | 2026-01-07 00:53:15.751790 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-07 00:53:15.751794 | orchestrator | Wednesday 07 January 2026 00:48:42 +0000 (0:00:01.353) 0:00:05.606 ***** 2026-01-07 00:53:15.751798 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:15.751801 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:15.751805 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:15.751808 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.751812 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.751816 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.751819 | orchestrator | 2026-01-07 00:53:15.751824 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-07 00:53:15.751827 | orchestrator | Wednesday 07 January 2026 00:48:42 +0000 (0:00:00.892) 0:00:06.498 ***** 2026-01-07 00:53:15.751831 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.751835 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.751839 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.751842 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.751846 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.751850 | 
orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.751853 | orchestrator | 2026-01-07 00:53:15.751857 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-07 00:53:15.751861 | orchestrator | Wednesday 07 January 2026 00:48:43 +0000 (0:00:00.791) 0:00:07.290 ***** 2026-01-07 00:53:15.751865 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.751868 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.751872 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.751875 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.751879 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.751883 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.751887 | orchestrator | 2026-01-07 00:53:15.751893 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-07 00:53:15.751899 | orchestrator | Wednesday 07 January 2026 00:48:44 +0000 (0:00:00.658) 0:00:07.948 ***** 2026-01-07 00:53:15.751904 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:53:15.751914 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:53:15.751923 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.751929 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:53:15.751935 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:53:15.751941 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.751947 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:53:15.751953 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:53:15.751958 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.751985 | orchestrator 
| skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:53:15.752002 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:53:15.752009 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.752015 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:53:15.752021 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:53:15.752027 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.752033 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:53:15.752040 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:53:15.752046 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.752052 | orchestrator | 2026-01-07 00:53:15.752059 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-07 00:53:15.752064 | orchestrator | Wednesday 07 January 2026 00:48:45 +0000 (0:00:00.675) 0:00:08.624 ***** 2026-01-07 00:53:15.752067 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.752071 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.752075 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.752079 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.752083 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.752087 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.752090 | orchestrator | 2026-01-07 00:53:15.752094 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-07 00:53:15.752100 | orchestrator | Wednesday 07 January 2026 00:48:46 +0000 (0:00:01.470) 0:00:10.095 ***** 2026-01-07 00:53:15.752103 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:53:15.752108 | 
orchestrator | ok: [testbed-node-4] 2026-01-07 00:53:15.752112 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:53:15.752115 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.752119 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.752123 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.752126 | orchestrator | 2026-01-07 00:53:15.752130 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-07 00:53:15.752134 | orchestrator | Wednesday 07 January 2026 00:48:47 +0000 (0:00:00.683) 0:00:10.778 ***** 2026-01-07 00:53:15.752138 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.752141 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:15.752145 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:15.752149 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.752152 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:15.752156 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.752160 | orchestrator | 2026-01-07 00:53:15.752163 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-07 00:53:15.752167 | orchestrator | Wednesday 07 January 2026 00:48:53 +0000 (0:00:06.116) 0:00:16.894 ***** 2026-01-07 00:53:15.752171 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.752175 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.752179 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.752182 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.752186 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.752192 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.752199 | orchestrator | 2026-01-07 00:53:15.752207 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-07 00:53:15.752214 | orchestrator | Wednesday 07 January 2026 00:48:54 +0000 (0:00:01.405) 0:00:18.300 ***** 
2026-01-07 00:53:15.752220 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.752226 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.752232 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.752237 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.752245 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.752257 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.752263 | orchestrator | 2026-01-07 00:53:15.752269 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-07 00:53:15.752277 | orchestrator | Wednesday 07 January 2026 00:48:56 +0000 (0:00:02.219) 0:00:20.520 ***** 2026-01-07 00:53:15.752284 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.752290 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.752298 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.752304 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.752311 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.752317 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.752350 | orchestrator | 2026-01-07 00:53:15.752357 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-07 00:53:15.752364 | orchestrator | Wednesday 07 January 2026 00:48:58 +0000 (0:00:01.196) 0:00:21.717 ***** 2026-01-07 00:53:15.752370 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-07 00:53:15.752376 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-07 00:53:15.752382 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.752389 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-07 00:53:15.752395 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-07 00:53:15.752401 | orchestrator | skipping: [testbed-node-4] 2026-01-07 
00:53:15.752408 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-07 00:53:15.752414 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-07 00:53:15.752421 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.752427 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-07 00:53:15.752433 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-07 00:53:15.752439 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-07 00:53:15.754453 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-07 00:53:15.754511 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.754516 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.754521 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-07 00:53:15.754525 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-07 00:53:15.754529 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.754533 | orchestrator | 2026-01-07 00:53:15.754538 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-07 00:53:15.754558 | orchestrator | Wednesday 07 January 2026 00:48:59 +0000 (0:00:01.542) 0:00:23.260 ***** 2026-01-07 00:53:15.754563 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.754567 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.754571 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.754575 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.754578 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.754582 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.754586 | orchestrator | 2026-01-07 00:53:15.754590 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-01-07 00:53:15.754595 | orchestrator | Wednesday 07 January 2026 
00:49:01 +0000 (0:00:02.180) 0:00:25.440 ***** 2026-01-07 00:53:15.754599 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.754603 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.754607 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.754610 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.754614 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.754618 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.754622 | orchestrator | 2026-01-07 00:53:15.754626 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-01-07 00:53:15.754630 | orchestrator | 2026-01-07 00:53:15.754633 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-07 00:53:15.754652 | orchestrator | Wednesday 07 January 2026 00:49:04 +0000 (0:00:02.145) 0:00:27.585 ***** 2026-01-07 00:53:15.754656 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.754660 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.754664 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.754668 | orchestrator | 2026-01-07 00:53:15.754672 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-07 00:53:15.754675 | orchestrator | Wednesday 07 January 2026 00:49:06 +0000 (0:00:02.759) 0:00:30.345 ***** 2026-01-07 00:53:15.754679 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.754683 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.754687 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.754690 | orchestrator | 2026-01-07 00:53:15.754694 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-07 00:53:15.754698 | orchestrator | Wednesday 07 January 2026 00:49:07 +0000 (0:00:01.202) 0:00:31.548 ***** 2026-01-07 00:53:15.754702 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.754705 | 
orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.754710 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.754713 | orchestrator | 2026-01-07 00:53:15.754717 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-07 00:53:15.754721 | orchestrator | Wednesday 07 January 2026 00:49:09 +0000 (0:00:01.022) 0:00:32.570 ***** 2026-01-07 00:53:15.754725 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.754729 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.754736 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.754748 | orchestrator | 2026-01-07 00:53:15.754753 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-07 00:53:15.754757 | orchestrator | Wednesday 07 January 2026 00:49:10 +0000 (0:00:01.308) 0:00:33.878 ***** 2026-01-07 00:53:15.754766 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.754770 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.754774 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.754778 | orchestrator | 2026-01-07 00:53:15.754781 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-07 00:53:15.754785 | orchestrator | Wednesday 07 January 2026 00:49:10 +0000 (0:00:00.687) 0:00:34.565 ***** 2026-01-07 00:53:15.754789 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.754793 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.754796 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.754800 | orchestrator | 2026-01-07 00:53:15.754804 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-07 00:53:15.754808 | orchestrator | Wednesday 07 January 2026 00:49:12 +0000 (0:00:01.092) 0:00:35.658 ***** 2026-01-07 00:53:15.754811 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.754815 | orchestrator | changed: 
[testbed-node-1] 2026-01-07 00:53:15.754819 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.754822 | orchestrator | 2026-01-07 00:53:15.754826 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-07 00:53:15.754830 | orchestrator | Wednesday 07 January 2026 00:49:14 +0000 (0:00:02.053) 0:00:37.712 ***** 2026-01-07 00:53:15.754834 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:15.754838 | orchestrator | 2026-01-07 00:53:15.754841 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-07 00:53:15.754845 | orchestrator | Wednesday 07 January 2026 00:49:14 +0000 (0:00:00.681) 0:00:38.394 ***** 2026-01-07 00:53:15.754849 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.754853 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.754856 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.754860 | orchestrator | 2026-01-07 00:53:15.754864 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-01-07 00:53:15.754868 | orchestrator | Wednesday 07 January 2026 00:49:17 +0000 (0:00:02.446) 0:00:40.840 ***** 2026-01-07 00:53:15.754871 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.754879 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.754883 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.754887 | orchestrator | 2026-01-07 00:53:15.754891 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-07 00:53:15.754894 | orchestrator | Wednesday 07 January 2026 00:49:18 +0000 (0:00:01.160) 0:00:42.000 ***** 2026-01-07 00:53:15.754898 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.754902 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.754905 | orchestrator | changed: [testbed-node-0] 
2026-01-07 00:53:15.754909 | orchestrator | 2026-01-07 00:53:15.754913 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-01-07 00:53:15.754917 | orchestrator | Wednesday 07 January 2026 00:49:20 +0000 (0:00:01.764) 0:00:43.764 ***** 2026-01-07 00:53:15.754920 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.754924 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.754928 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.754932 | orchestrator | 2026-01-07 00:53:15.754936 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-01-07 00:53:15.754943 | orchestrator | Wednesday 07 January 2026 00:49:21 +0000 (0:00:01.599) 0:00:45.363 ***** 2026-01-07 00:53:15.754947 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.754951 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.754955 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.754958 | orchestrator | 2026-01-07 00:53:15.754962 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-01-07 00:53:15.754966 | orchestrator | Wednesday 07 January 2026 00:49:22 +0000 (0:00:00.772) 0:00:46.135 ***** 2026-01-07 00:53:15.754970 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.754974 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.754977 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.754981 | orchestrator | 2026-01-07 00:53:15.754985 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-01-07 00:53:15.754989 | orchestrator | Wednesday 07 January 2026 00:49:23 +0000 (0:00:00.847) 0:00:46.982 ***** 2026-01-07 00:53:15.754993 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.754996 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.755000 | orchestrator | changed: [testbed-node-2] 
2026-01-07 00:53:15.755004 | orchestrator | 2026-01-07 00:53:15.755008 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-01-07 00:53:15.755011 | orchestrator | Wednesday 07 January 2026 00:49:25 +0000 (0:00:01.971) 0:00:48.954 ***** 2026-01-07 00:53:15.755015 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.755019 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.755023 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.755027 | orchestrator | 2026-01-07 00:53:15.755030 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-01-07 00:53:15.755034 | orchestrator | Wednesday 07 January 2026 00:49:27 +0000 (0:00:02.361) 0:00:51.315 ***** 2026-01-07 00:53:15.755038 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.755042 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.755046 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.755050 | orchestrator | 2026-01-07 00:53:15.755054 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-01-07 00:53:15.755058 | orchestrator | Wednesday 07 January 2026 00:49:28 +0000 (0:00:01.193) 0:00:52.509 ***** 2026-01-07 00:53:15.755062 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-07 00:53:15.755067 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-07 00:53:15.755073 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-07 00:53:15.755077 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-01-07 00:53:15.755084 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-07 00:53:15.755088 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-07 00:53:15.755092 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-07 00:53:15.755096 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-07 00:53:15.755100 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-07 00:53:15.755103 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-07 00:53:15.755107 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-07 00:53:15.755111 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-01-07 00:53:15.755115 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.755119 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.755123 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.755126 | orchestrator | 2026-01-07 00:53:15.755130 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-07 00:53:15.755134 | orchestrator | Wednesday 07 January 2026 00:50:12 +0000 (0:00:43.171) 0:01:35.681 ***** 2026-01-07 00:53:15.755138 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.755142 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.755145 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.755149 | orchestrator | 2026-01-07 00:53:15.755153 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-07 00:53:15.755157 | orchestrator | Wednesday 07 January 2026 00:50:12 +0000 (0:00:00.283) 0:01:35.964 ***** 2026-01-07 00:53:15.755161 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.755164 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.755168 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.755172 | orchestrator | 2026-01-07 00:53:15.755176 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-07 00:53:15.755179 | orchestrator | Wednesday 07 January 2026 00:50:13 +0000 (0:00:01.138) 0:01:37.103 ***** 2026-01-07 00:53:15.755184 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.755190 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.755196 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.755203 | orchestrator | 2026-01-07 00:53:15.755210 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-07 00:53:15.755214 | orchestrator | Wednesday 07 January 2026 00:50:14 +0000 (0:00:01.371) 0:01:38.475 ***** 2026-01-07 00:53:15.755217 
| orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.755221 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.755225 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.755229 | orchestrator | 2026-01-07 00:53:15.755232 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-07 00:53:15.755236 | orchestrator | Wednesday 07 January 2026 00:50:40 +0000 (0:00:25.358) 0:02:03.833 ***** 2026-01-07 00:53:15.755240 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.755244 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.755247 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.755251 | orchestrator | 2026-01-07 00:53:15.755255 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-07 00:53:15.755259 | orchestrator | Wednesday 07 January 2026 00:50:41 +0000 (0:00:00.897) 0:02:04.731 ***** 2026-01-07 00:53:15.755266 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.755270 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.755273 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.755277 | orchestrator | 2026-01-07 00:53:15.755281 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-07 00:53:15.755285 | orchestrator | Wednesday 07 January 2026 00:50:41 +0000 (0:00:00.605) 0:02:05.336 ***** 2026-01-07 00:53:15.755288 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.755292 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.755296 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.755299 | orchestrator | 2026-01-07 00:53:15.755303 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-07 00:53:15.755307 | orchestrator | Wednesday 07 January 2026 00:50:42 +0000 (0:00:00.890) 0:02:06.227 ***** 2026-01-07 00:53:15.755311 | orchestrator | ok: [testbed-node-0] 
2026-01-07 00:53:15.755315 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.755359 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.755364 | orchestrator | 2026-01-07 00:53:15.755367 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-07 00:53:15.755371 | orchestrator | Wednesday 07 January 2026 00:50:43 +0000 (0:00:01.299) 0:02:07.526 ***** 2026-01-07 00:53:15.755375 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.755379 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.755382 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.755386 | orchestrator | 2026-01-07 00:53:15.755390 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-07 00:53:15.755393 | orchestrator | Wednesday 07 January 2026 00:50:44 +0000 (0:00:00.281) 0:02:07.808 ***** 2026-01-07 00:53:15.755397 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.755405 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.755409 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.755412 | orchestrator | 2026-01-07 00:53:15.755416 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-07 00:53:15.755420 | orchestrator | Wednesday 07 January 2026 00:50:44 +0000 (0:00:00.556) 0:02:08.364 ***** 2026-01-07 00:53:15.755424 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.755427 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.755431 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.755435 | orchestrator | 2026-01-07 00:53:15.755439 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-07 00:53:15.755442 | orchestrator | Wednesday 07 January 2026 00:50:45 +0000 (0:00:00.606) 0:02:08.971 ***** 2026-01-07 00:53:15.755446 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.755450 | 
orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.755453 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.755457 | orchestrator | 2026-01-07 00:53:15.755461 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-07 00:53:15.755465 | orchestrator | Wednesday 07 January 2026 00:50:46 +0000 (0:00:01.096) 0:02:10.067 ***** 2026-01-07 00:53:15.755469 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:15.755472 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:15.755477 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:15.755483 | orchestrator | 2026-01-07 00:53:15.755488 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-07 00:53:15.755492 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:00.753) 0:02:10.821 ***** 2026-01-07 00:53:15.755496 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.755500 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.755503 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.755507 | orchestrator | 2026-01-07 00:53:15.755511 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-07 00:53:15.755514 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:00.286) 0:02:11.108 ***** 2026-01-07 00:53:15.755518 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.755527 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.755531 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.755534 | orchestrator | 2026-01-07 00:53:15.755538 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-07 00:53:15.755542 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:00.269) 0:02:11.377 ***** 2026-01-07 00:53:15.755546 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.755549 | orchestrator | 
ok: [testbed-node-0] 2026-01-07 00:53:15.755553 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.755557 | orchestrator | 2026-01-07 00:53:15.755561 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-07 00:53:15.755564 | orchestrator | Wednesday 07 January 2026 00:50:48 +0000 (0:00:00.821) 0:02:12.199 ***** 2026-01-07 00:53:15.755568 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.755572 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.755576 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.755579 | orchestrator | 2026-01-07 00:53:15.755583 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-07 00:53:15.755587 | orchestrator | Wednesday 07 January 2026 00:50:49 +0000 (0:00:00.673) 0:02:12.873 ***** 2026-01-07 00:53:15.755591 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-07 00:53:15.755598 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-07 00:53:15.755602 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-07 00:53:15.755606 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-07 00:53:15.755610 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-07 00:53:15.755614 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-07 00:53:15.755618 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-07 00:53:15.755621 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-07 
00:53:15.755625 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-07 00:53:15.755629 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-07 00:53:15.755633 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-07 00:53:15.755636 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-07 00:53:15.755640 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-07 00:53:15.755644 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-07 00:53:15.755647 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-07 00:53:15.755651 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-07 00:53:15.755655 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-07 00:53:15.755659 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-07 00:53:15.755663 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-07 00:53:15.755670 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-07 00:53:15.755674 | orchestrator | 2026-01-07 00:53:15.755678 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-07 00:53:15.755682 | orchestrator | 2026-01-07 00:53:15.755685 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-07 00:53:15.755695 | orchestrator | Wednesday 07 January 2026 00:50:52 +0000 (0:00:03.258) 
0:02:16.131 ***** 2026-01-07 00:53:15.755699 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:53:15.755703 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:53:15.755706 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:53:15.755710 | orchestrator | 2026-01-07 00:53:15.755714 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-07 00:53:15.755718 | orchestrator | Wednesday 07 January 2026 00:50:53 +0000 (0:00:00.693) 0:02:16.825 ***** 2026-01-07 00:53:15.755721 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:53:15.755725 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:53:15.755729 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:53:15.755733 | orchestrator | 2026-01-07 00:53:15.755736 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-07 00:53:15.755740 | orchestrator | Wednesday 07 January 2026 00:50:53 +0000 (0:00:00.621) 0:02:17.446 ***** 2026-01-07 00:53:15.755744 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:53:15.755748 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:53:15.755751 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:53:15.755755 | orchestrator | 2026-01-07 00:53:15.755759 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-07 00:53:15.755762 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:00.349) 0:02:17.795 ***** 2026-01-07 00:53:15.755769 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:53:15.755775 | orchestrator | 2026-01-07 00:53:15.755780 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-07 00:53:15.755789 | orchestrator | Wednesday 07 January 2026 00:50:55 +0000 (0:00:00.828) 0:02:18.624 ***** 2026-01-07 00:53:15.755798 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.755803 | 
orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.755809 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.755815 | orchestrator | 2026-01-07 00:53:15.755820 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-07 00:53:15.755826 | orchestrator | Wednesday 07 January 2026 00:50:55 +0000 (0:00:00.313) 0:02:18.937 ***** 2026-01-07 00:53:15.755832 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.755838 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.755844 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.755849 | orchestrator | 2026-01-07 00:53:15.755855 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-07 00:53:15.755861 | orchestrator | Wednesday 07 January 2026 00:50:55 +0000 (0:00:00.282) 0:02:19.220 ***** 2026-01-07 00:53:15.755867 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.755873 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.755879 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.755885 | orchestrator | 2026-01-07 00:53:15.755890 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-07 00:53:15.755896 | orchestrator | Wednesday 07 January 2026 00:50:55 +0000 (0:00:00.317) 0:02:19.538 ***** 2026-01-07 00:53:15.755902 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:15.755908 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:15.755915 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:15.755921 | orchestrator | 2026-01-07 00:53:15.755932 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-07 00:53:15.755938 | orchestrator | Wednesday 07 January 2026 00:50:56 +0000 (0:00:00.839) 0:02:20.377 ***** 2026-01-07 00:53:15.755944 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:15.755951 | 
orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:15.755957 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:15.755964 | orchestrator | 2026-01-07 00:53:15.755970 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-07 00:53:15.755973 | orchestrator | Wednesday 07 January 2026 00:50:57 +0000 (0:00:01.065) 0:02:21.443 ***** 2026-01-07 00:53:15.755982 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:15.755986 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:15.755990 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:15.755993 | orchestrator | 2026-01-07 00:53:15.755997 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-07 00:53:15.756001 | orchestrator | Wednesday 07 January 2026 00:50:59 +0000 (0:00:01.162) 0:02:22.605 ***** 2026-01-07 00:53:15.756005 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:15.756009 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:15.756012 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:15.756016 | orchestrator | 2026-01-07 00:53:15.756020 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-07 00:53:15.756024 | orchestrator | 2026-01-07 00:53:15.756027 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-07 00:53:15.756031 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:10.894) 0:02:33.499 ***** 2026-01-07 00:53:15.756035 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:15.756039 | orchestrator | 2026-01-07 00:53:15.756043 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-07 00:53:15.756046 | orchestrator | Wednesday 07 January 2026 00:51:10 +0000 (0:00:00.716) 0:02:34.216 ***** 2026-01-07 00:53:15.756051 | orchestrator | changed: [testbed-manager] 2026-01-07 
00:53:15.756057 | orchestrator | 2026-01-07 00:53:15.756062 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-07 00:53:15.756068 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:00.421) 0:02:34.637 ***** 2026-01-07 00:53:15.756073 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-07 00:53:15.756078 | orchestrator | 2026-01-07 00:53:15.756083 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-07 00:53:15.756089 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:00.558) 0:02:35.196 ***** 2026-01-07 00:53:15.756094 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:15.756100 | orchestrator | 2026-01-07 00:53:15.756109 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-07 00:53:15.756114 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:00.963) 0:02:36.160 ***** 2026-01-07 00:53:15.756119 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:15.756125 | orchestrator | 2026-01-07 00:53:15.756131 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-07 00:53:15.756137 | orchestrator | Wednesday 07 January 2026 00:51:13 +0000 (0:00:00.658) 0:02:36.818 ***** 2026-01-07 00:53:15.756143 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 00:53:15.756149 | orchestrator | 2026-01-07 00:53:15.756155 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-07 00:53:15.756161 | orchestrator | Wednesday 07 January 2026 00:51:15 +0000 (0:00:01.806) 0:02:38.625 ***** 2026-01-07 00:53:15.756166 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 00:53:15.756172 | orchestrator | 2026-01-07 00:53:15.756178 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-01-07 00:53:15.756185 | orchestrator | Wednesday 07 January 2026 00:51:15 +0000 (0:00:00.861) 0:02:39.486 ***** 2026-01-07 00:53:15.756190 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:15.756196 | orchestrator | 2026-01-07 00:53:15.756202 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-07 00:53:15.756208 | orchestrator | Wednesday 07 January 2026 00:51:16 +0000 (0:00:00.853) 0:02:40.340 ***** 2026-01-07 00:53:15.756215 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:15.756221 | orchestrator | 2026-01-07 00:53:15.756226 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-07 00:53:15.756230 | orchestrator | 2026-01-07 00:53:15.756234 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-07 00:53:15.756237 | orchestrator | Wednesday 07 January 2026 00:51:17 +0000 (0:00:00.498) 0:02:40.838 ***** 2026-01-07 00:53:15.756247 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:15.756250 | orchestrator | 2026-01-07 00:53:15.756254 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-07 00:53:15.756258 | orchestrator | Wednesday 07 January 2026 00:51:17 +0000 (0:00:00.171) 0:02:41.010 ***** 2026-01-07 00:53:15.756262 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:53:15.756266 | orchestrator | 2026-01-07 00:53:15.756270 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-01-07 00:53:15.756273 | orchestrator | Wednesday 07 January 2026 00:51:17 +0000 (0:00:00.295) 0:02:41.306 ***** 2026-01-07 00:53:15.756277 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:15.756284 | orchestrator | 2026-01-07 00:53:15.756290 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-01-07 00:53:15.756296 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:01.000) 0:02:42.306 ***** 2026-01-07 00:53:15.756302 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:15.756308 | orchestrator | 2026-01-07 00:53:15.756313 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-01-07 00:53:15.756337 | orchestrator | Wednesday 07 January 2026 00:51:20 +0000 (0:00:01.815) 0:02:44.122 ***** 2026-01-07 00:53:15.756343 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:15.756348 | orchestrator | 2026-01-07 00:53:15.756354 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-01-07 00:53:15.756360 | orchestrator | Wednesday 07 January 2026 00:51:21 +0000 (0:00:01.351) 0:02:45.474 ***** 2026-01-07 00:53:15.756366 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:15.756372 | orchestrator | 2026-01-07 00:53:15.756385 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-01-07 00:53:15.756392 | orchestrator | Wednesday 07 January 2026 00:51:22 +0000 (0:00:00.502) 0:02:45.976 ***** 2026-01-07 00:53:15.756398 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:15.756405 | orchestrator | 2026-01-07 00:53:15.756410 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-01-07 00:53:15.756416 | orchestrator | Wednesday 07 January 2026 00:51:31 +0000 (0:00:09.515) 0:02:55.492 ***** 2026-01-07 00:53:15.756422 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:15.756429 | orchestrator | 2026-01-07 00:53:15.756434 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-01-07 00:53:15.756440 | orchestrator | Wednesday 07 January 2026 00:51:47 +0000 (0:00:15.758) 0:03:11.251 ***** 2026-01-07 00:53:15.756446 | orchestrator | ok: [testbed-manager] 2026-01-07 
00:53:15.756452 | orchestrator | 2026-01-07 00:53:15.756459 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-01-07 00:53:15.756465 | orchestrator | 2026-01-07 00:53:15.756472 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-01-07 00:53:15.756478 | orchestrator | Wednesday 07 January 2026 00:51:48 +0000 (0:00:00.597) 0:03:11.848 ***** 2026-01-07 00:53:15.756484 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.756490 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.756496 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.756503 | orchestrator | 2026-01-07 00:53:15.756508 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-01-07 00:53:15.756512 | orchestrator | Wednesday 07 January 2026 00:51:48 +0000 (0:00:00.301) 0:03:12.150 ***** 2026-01-07 00:53:15.756515 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.756519 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.756523 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.756527 | orchestrator | 2026-01-07 00:53:15.756531 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-01-07 00:53:15.756534 | orchestrator | Wednesday 07 January 2026 00:51:49 +0000 (0:00:00.587) 0:03:12.738 ***** 2026-01-07 00:53:15.756538 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:15.756547 | orchestrator | 2026-01-07 00:53:15.756551 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-01-07 00:53:15.756555 | orchestrator | Wednesday 07 January 2026 00:51:49 +0000 (0:00:00.522) 0:03:13.261 ***** 2026-01-07 00:53:15.756559 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:53:15.756562 | 
orchestrator | 2026-01-07 00:53:15.756571 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-01-07 00:53:15.756575 | orchestrator | Wednesday 07 January 2026 00:51:50 +0000 (0:00:00.626) 0:03:13.888 ***** 2026-01-07 00:53:15.756578 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:53:15.756582 | orchestrator | 2026-01-07 00:53:15.756586 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-07 00:53:15.756590 | orchestrator | Wednesday 07 January 2026 00:51:50 +0000 (0:00:00.651) 0:03:14.539 ***** 2026-01-07 00:53:15.756593 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.756597 | orchestrator | 2026-01-07 00:53:15.756601 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-07 00:53:15.756605 | orchestrator | Wednesday 07 January 2026 00:51:51 +0000 (0:00:00.088) 0:03:14.628 ***** 2026-01-07 00:53:15.756609 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:53:15.756612 | orchestrator | 2026-01-07 00:53:15.756616 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-07 00:53:15.756620 | orchestrator | Wednesday 07 January 2026 00:51:51 +0000 (0:00:00.750) 0:03:15.379 ***** 2026-01-07 00:53:15.756623 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.756627 | orchestrator | 2026-01-07 00:53:15.756633 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-07 00:53:15.756639 | orchestrator | Wednesday 07 January 2026 00:51:51 +0000 (0:00:00.105) 0:03:15.485 ***** 2026-01-07 00:53:15.756645 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.756652 | orchestrator | 2026-01-07 00:53:15.756658 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-07 00:53:15.756664 | orchestrator | Wednesday 07 
January 2026 00:51:52 +0000 (0:00:00.095) 0:03:15.580 ***** 2026-01-07 00:53:15.756670 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.756676 | orchestrator | 2026-01-07 00:53:15.756682 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-07 00:53:15.756687 | orchestrator | Wednesday 07 January 2026 00:51:52 +0000 (0:00:00.141) 0:03:15.721 ***** 2026-01-07 00:53:15.756690 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.756696 | orchestrator | 2026-01-07 00:53:15.756702 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-07 00:53:15.756707 | orchestrator | Wednesday 07 January 2026 00:51:52 +0000 (0:00:00.109) 0:03:15.831 ***** 2026-01-07 00:53:15.756713 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:53:15.756719 | orchestrator | 2026-01-07 00:53:15.756725 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-07 00:53:15.756731 | orchestrator | Wednesday 07 January 2026 00:51:58 +0000 (0:00:06.424) 0:03:22.255 ***** 2026-01-07 00:53:15.756736 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-07 00:53:15.756743 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
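The "Wait for Cilium resources" task above retries (30 retries visible in the log) until each Cilium workload reports ready. A minimal Ansible sketch of that pattern is shown below; the `kube-system` namespace, timeout, and retry/delay values are assumptions, not the actual `k3s_server_post` role contents:

```yaml
# Sketch of a readiness wait for the Cilium workloads named in the log.
# Namespace, timeout, retries, and delay are assumptions for illustration.
- name: Wait for Cilium resources
  ansible.builtin.command: >
    kubectl rollout status {{ item }}
    --namespace kube-system --timeout 30s
  loop:
    - deployment/cilium-operator
    - daemonset/cilium
    - deployment/hubble-relay
    - deployment/hubble-ui
  register: result
  retries: 30
  delay: 10
  until: result.rc == 0
  changed_when: false
  delegate_to: localhost
```

`kubectl rollout status` exits non-zero until the deployment or daemonset is fully rolled out, which is why a per-item `until`/`retries` loop produces the `FAILED - RETRYING` lines seen above.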
2026-01-07 00:53:15.756749 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-07 00:53:15.756755 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-07 00:53:15.756760 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-07 00:53:15.756766 | orchestrator | 2026-01-07 00:53:15.756772 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-07 00:53:15.756778 | orchestrator | Wednesday 07 January 2026 00:52:43 +0000 (0:00:44.604) 0:04:06.860 ***** 2026-01-07 00:53:15.756790 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:53:15.756796 | orchestrator | 2026-01-07 00:53:15.756802 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-07 00:53:15.756817 | orchestrator | Wednesday 07 January 2026 00:52:44 +0000 (0:00:01.273) 0:04:08.133 ***** 2026-01-07 00:53:15.756823 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:53:15.756828 | orchestrator | 2026-01-07 00:53:15.756833 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-07 00:53:15.756839 | orchestrator | Wednesday 07 January 2026 00:52:46 +0000 (0:00:01.591) 0:04:09.724 ***** 2026-01-07 00:53:15.756845 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:53:15.756851 | orchestrator | 2026-01-07 00:53:15.756856 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-07 00:53:15.756862 | orchestrator | Wednesday 07 January 2026 00:52:47 +0000 (0:00:01.059) 0:04:10.784 ***** 2026-01-07 00:53:15.756867 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.756873 | orchestrator | 2026-01-07 00:53:15.756879 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-07 00:53:15.756886 | orchestrator 
| Wednesday 07 January 2026 00:52:47 +0000 (0:00:00.216) 0:04:11.000 ***** 2026-01-07 00:53:15.756892 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-07 00:53:15.756898 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-07 00:53:15.756904 | orchestrator | 2026-01-07 00:53:15.756910 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-07 00:53:15.756916 | orchestrator | Wednesday 07 January 2026 00:52:49 +0000 (0:00:01.887) 0:04:12.888 ***** 2026-01-07 00:53:15.756922 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.756928 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.756934 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.756940 | orchestrator | 2026-01-07 00:53:15.756946 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-07 00:53:15.756951 | orchestrator | Wednesday 07 January 2026 00:52:49 +0000 (0:00:00.326) 0:04:13.214 ***** 2026-01-07 00:53:15.756957 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.756963 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.756968 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.756975 | orchestrator | 2026-01-07 00:53:15.756981 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-07 00:53:15.756987 | orchestrator | 2026-01-07 00:53:15.756992 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-07 00:53:15.757003 | orchestrator | Wednesday 07 January 2026 00:52:50 +0000 (0:00:01.256) 0:04:14.470 ***** 2026-01-07 00:53:15.757009 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:15.757015 | orchestrator | 2026-01-07 00:53:15.757021 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-01-07 00:53:15.757027 | orchestrator | Wednesday 07 January 2026 00:52:51 +0000 (0:00:00.194) 0:04:14.665 ***** 2026-01-07 00:53:15.757033 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:53:15.757039 | orchestrator | 2026-01-07 00:53:15.757045 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-07 00:53:15.757051 | orchestrator | Wednesday 07 January 2026 00:52:51 +0000 (0:00:00.263) 0:04:14.928 ***** 2026-01-07 00:53:15.757057 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:15.757063 | orchestrator | 2026-01-07 00:53:15.757069 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-07 00:53:15.757076 | orchestrator | 2026-01-07 00:53:15.757082 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-07 00:53:15.757088 | orchestrator | Wednesday 07 January 2026 00:52:57 +0000 (0:00:05.723) 0:04:20.652 ***** 2026-01-07 00:53:15.757094 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:53:15.757100 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:53:15.757107 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:53:15.757114 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:15.757120 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:15.757134 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:15.757141 | orchestrator | 2026-01-07 00:53:15.757147 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-07 00:53:15.757153 | orchestrator | Wednesday 07 January 2026 00:52:57 +0000 (0:00:00.838) 0:04:21.490 ***** 2026-01-07 00:53:15.757159 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-07 00:53:15.757165 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-01-07 00:53:15.757170 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-07 00:53:15.757176 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-07 00:53:15.757182 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-07 00:53:15.757188 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-07 00:53:15.757194 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-07 00:53:15.757201 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-07 00:53:15.757207 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-07 00:53:15.757213 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-07 00:53:15.757219 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-07 00:53:15.757225 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-07 00:53:15.757242 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-07 00:53:15.757250 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-07 00:53:15.757256 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-07 00:53:15.757263 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-07 00:53:15.757268 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-07 00:53:15.757274 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-01-07 00:53:15.757281 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-07 00:53:15.757287 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-07 00:53:15.757294 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-07 00:53:15.757300 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-07 00:53:15.757306 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-07 00:53:15.757311 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-07 00:53:15.757335 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-07 00:53:15.757342 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-07 00:53:15.757348 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-07 00:53:15.757354 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-07 00:53:15.757359 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-07 00:53:15.757365 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-07 00:53:15.757372 | orchestrator | 2026-01-07 00:53:15.757378 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-07 00:53:15.757390 | orchestrator | Wednesday 07 January 2026 00:53:13 +0000 (0:00:15.205) 0:04:36.696 ***** 2026-01-07 00:53:15.757395 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.757406 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.757413 | orchestrator | 
skipping: [testbed-node-5] 2026-01-07 00:53:15.757420 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.757426 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.757431 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.757437 | orchestrator | 2026-01-07 00:53:15.757443 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-07 00:53:15.757449 | orchestrator | Wednesday 07 January 2026 00:53:13 +0000 (0:00:00.585) 0:04:37.281 ***** 2026-01-07 00:53:15.757456 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:53:15.757461 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:53:15.757467 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:53:15.757473 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:15.757479 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:15.757485 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:15.757491 | orchestrator | 2026-01-07 00:53:15.757497 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:53:15.757503 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:53:15.757512 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-07 00:53:15.757519 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-07 00:53:15.757525 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-07 00:53:15.757530 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 00:53:15.757536 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 00:53:15.757542 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 00:53:15.757548 | orchestrator | 2026-01-07 00:53:15.757554 | orchestrator | 2026-01-07 00:53:15.757560 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:53:15.757566 | orchestrator | Wednesday 07 January 2026 00:53:14 +0000 (0:00:00.519) 0:04:37.801 ***** 2026-01-07 00:53:15.757572 | orchestrator | =============================================================================== 2026-01-07 00:53:15.757578 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.60s 2026-01-07 00:53:15.757584 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.17s 2026-01-07 00:53:15.757590 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.36s 2026-01-07 00:53:15.757603 | orchestrator | kubectl : Install required packages ------------------------------------ 15.76s 2026-01-07 00:53:15.757610 | orchestrator | Manage labels ---------------------------------------------------------- 15.21s 2026-01-07 00:53:15.757616 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.89s 2026-01-07 00:53:15.757623 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.52s 2026-01-07 00:53:15.757629 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.42s 2026-01-07 00:53:15.757635 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.12s 2026-01-07 00:53:15.757641 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.72s 2026-01-07 00:53:15.757655 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.26s 2026-01-07 00:53:15.757661 | orchestrator 
| k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.76s 2026-01-07 00:53:15.757668 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.45s 2026-01-07 00:53:15.757674 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.36s 2026-01-07 00:53:15.757680 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.22s 2026-01-07 00:53:15.757685 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.18s 2026-01-07 00:53:15.757691 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.15s 2026-01-07 00:53:15.757697 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.05s 2026-01-07 00:53:15.757703 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.97s 2026-01-07 00:53:15.757709 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.89s 2026-01-07 00:53:15.757715 | orchestrator | 2026-01-07 00:53:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:18.804838 | orchestrator | 2026-01-07 00:53:18 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:18.806447 | orchestrator | 2026-01-07 00:53:18 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:18.809729 | orchestrator | 2026-01-07 00:53:18 | INFO  | Task a82187dd-de14-430c-bbd0-cbccde6204d7 is in state STARTED 2026-01-07 00:53:18.810860 | orchestrator | 2026-01-07 00:53:18 | INFO  | Task 6966f62d-294d-4a17-a809-5884d4f6f999 is in state STARTED 2026-01-07 00:53:18.811957 | orchestrator | 2026-01-07 00:53:18 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:18.815139 | orchestrator | 2026-01-07 00:53:18 | INFO  | Task 
508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:18.815204 | orchestrator | 2026-01-07 00:53:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:21.857932 | orchestrator | 2026-01-07 00:53:21 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:21.858424 | orchestrator | 2026-01-07 00:53:21 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:21.859236 | orchestrator | 2026-01-07 00:53:21 | INFO  | Task a82187dd-de14-430c-bbd0-cbccde6204d7 is in state SUCCESS 2026-01-07 00:53:21.860115 | orchestrator | 2026-01-07 00:53:21 | INFO  | Task 6966f62d-294d-4a17-a809-5884d4f6f999 is in state STARTED 2026-01-07 00:53:21.861158 | orchestrator | 2026-01-07 00:53:21 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:21.862173 | orchestrator | 2026-01-07 00:53:21 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:21.862209 | orchestrator | 2026-01-07 00:53:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:24.904115 | orchestrator | 2026-01-07 00:53:24 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:24.904425 | orchestrator | 2026-01-07 00:53:24 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:24.909150 | orchestrator | 2026-01-07 00:53:24 | INFO  | Task 6966f62d-294d-4a17-a809-5884d4f6f999 is in state STARTED 2026-01-07 00:53:24.909248 | orchestrator | 2026-01-07 00:53:24 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:24.909810 | orchestrator | 2026-01-07 00:53:24 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:24.909875 | orchestrator | 2026-01-07 00:53:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:27.949055 | orchestrator | 2026-01-07 00:53:27 | INFO  | Task 
f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:27.949546 | orchestrator | 2026-01-07 00:53:27 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:27.950122 | orchestrator | 2026-01-07 00:53:27 | INFO  | Task 6966f62d-294d-4a17-a809-5884d4f6f999 is in state SUCCESS 2026-01-07 00:53:27.950851 | orchestrator | 2026-01-07 00:53:27 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:27.951553 | orchestrator | 2026-01-07 00:53:27 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:27.951612 | orchestrator | 2026-01-07 00:53:27 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:30.982478 | orchestrator | 2026-01-07 00:53:30 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:30.984400 | orchestrator | 2026-01-07 00:53:30 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:30.985326 | orchestrator | 2026-01-07 00:53:30 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:30.986469 | orchestrator | 2026-01-07 00:53:30 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:30.986515 | orchestrator | 2026-01-07 00:53:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:34.013966 | orchestrator | 2026-01-07 00:53:34 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:34.015431 | orchestrator | 2026-01-07 00:53:34 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:34.017415 | orchestrator | 2026-01-07 00:53:34 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:34.017455 | orchestrator | 2026-01-07 00:53:34 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:34.017460 | orchestrator | 2026-01-07 00:53:34 | INFO  | Wait 1 
second(s) until the next check 2026-01-07 00:53:37.075121 | orchestrator | 2026-01-07 00:53:37 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:37.078254 | orchestrator | 2026-01-07 00:53:37 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:37.078322 | orchestrator | 2026-01-07 00:53:37 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:37.078328 | orchestrator | 2026-01-07 00:53:37 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:37.078332 | orchestrator | 2026-01-07 00:53:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:40.110480 | orchestrator | 2026-01-07 00:53:40 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:40.112649 | orchestrator | 2026-01-07 00:53:40 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:40.113662 | orchestrator | 2026-01-07 00:53:40 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:40.114375 | orchestrator | 2026-01-07 00:53:40 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:40.114410 | orchestrator | 2026-01-07 00:53:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:43.156887 | orchestrator | 2026-01-07 00:53:43 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:43.157698 | orchestrator | 2026-01-07 00:53:43 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:43.158361 | orchestrator | 2026-01-07 00:53:43 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:43.160585 | orchestrator | 2026-01-07 00:53:43 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:43.160621 | orchestrator | 2026-01-07 00:53:43 | INFO  | Wait 1 second(s) until the next check 
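The repeated "Task … is in state STARTED" lines come from a polling loop that re-checks each submitted task until it leaves the running state, sleeping one second between rounds. A minimal Python sketch of that pattern, assuming a hypothetical `get_state(task_id)` accessor (the real log is produced by the OSISM manager, whose API may differ):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll each task until none is left in a running state.

    get_state(task_id) -> str is a hypothetical accessor standing in
    for however the orchestrator queries task state.
    """
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Drop tasks that reached a terminal state; keep polling the rest.
        pending = {t for t in pending if states[t] not in ("SUCCESS", "FAILURE")}
        if pending:
            print("Wait 1 second(s) until the next check")
            time.sleep(interval)
    return states
```

Tasks that finish drop out of the polling set one by one, which matches the log: each UUID is reported every round until it flips to SUCCESS and disappears from subsequent rounds.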
2026-01-07 00:53:46.235808 | orchestrator | 2026-01-07 00:53:46 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:46.237158 | orchestrator | 2026-01-07 00:53:46 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:46.239044 | orchestrator | 2026-01-07 00:53:46 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state STARTED 2026-01-07 00:53:46.239782 | orchestrator | 2026-01-07 00:53:46 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:46.239818 | orchestrator | 2026-01-07 00:53:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:49.281716 | orchestrator | 2026-01-07 00:53:49 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:49.281837 | orchestrator | 2026-01-07 00:53:49 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:49.283612 | orchestrator | 2026-01-07 00:53:49 | INFO  | Task 69477eab-bb78-47c4-aff9-969b2420b321 is in state SUCCESS 2026-01-07 00:53:49.284786 | orchestrator | 2026-01-07 00:53:49.284834 | orchestrator | 2026-01-07 00:53:49.284845 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-07 00:53:49.284858 | orchestrator | 2026-01-07 00:53:49.284870 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-07 00:53:49.284881 | orchestrator | Wednesday 07 January 2026 00:53:18 +0000 (0:00:00.201) 0:00:00.201 ***** 2026-01-07 00:53:49.284893 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-07 00:53:49.284905 | orchestrator | 2026-01-07 00:53:49.284917 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-07 00:53:49.284928 | orchestrator | Wednesday 07 January 2026 00:53:19 +0000 (0:00:00.739) 0:00:00.940 ***** 2026-01-07 00:53:49.284939 | orchestrator | 
changed: [testbed-manager] 2026-01-07 00:53:49.284950 | orchestrator | 2026-01-07 00:53:49.284961 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-07 00:53:49.284972 | orchestrator | Wednesday 07 January 2026 00:53:20 +0000 (0:00:01.227) 0:00:02.168 ***** 2026-01-07 00:53:49.284984 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:49.284994 | orchestrator | 2026-01-07 00:53:49.285005 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:53:49.285017 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:53:49.285029 | orchestrator | 2026-01-07 00:53:49.285060 | orchestrator | 2026-01-07 00:53:49.285082 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:53:49.285094 | orchestrator | Wednesday 07 January 2026 00:53:21 +0000 (0:00:00.514) 0:00:02.682 ***** 2026-01-07 00:53:49.285105 | orchestrator | =============================================================================== 2026-01-07 00:53:49.285116 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s 2026-01-07 00:53:49.285127 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s 2026-01-07 00:53:49.285138 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.51s 2026-01-07 00:53:49.285149 | orchestrator | 2026-01-07 00:53:49.285160 | orchestrator | 2026-01-07 00:53:49.285171 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-07 00:53:49.285182 | orchestrator | 2026-01-07 00:53:49.285224 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-07 00:53:49.285281 | orchestrator | Wednesday 07 January 2026 00:53:19 +0000 (0:00:00.164) 
0:00:00.164 ***** 2026-01-07 00:53:49.285294 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:49.285307 | orchestrator | 2026-01-07 00:53:49.285318 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-07 00:53:49.285328 | orchestrator | Wednesday 07 January 2026 00:53:19 +0000 (0:00:00.721) 0:00:00.886 ***** 2026-01-07 00:53:49.285339 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:49.285350 | orchestrator | 2026-01-07 00:53:49.285361 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-07 00:53:49.285374 | orchestrator | Wednesday 07 January 2026 00:53:20 +0000 (0:00:00.652) 0:00:01.539 ***** 2026-01-07 00:53:49.285387 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-07 00:53:49.285401 | orchestrator | 2026-01-07 00:53:49.285414 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-07 00:53:49.285426 | orchestrator | Wednesday 07 January 2026 00:53:21 +0000 (0:00:00.730) 0:00:02.269 ***** 2026-01-07 00:53:49.285438 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:49.285451 | orchestrator | 2026-01-07 00:53:49.285463 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-07 00:53:49.285476 | orchestrator | Wednesday 07 January 2026 00:53:22 +0000 (0:00:01.693) 0:00:03.963 ***** 2026-01-07 00:53:49.285489 | orchestrator | changed: [testbed-manager] 2026-01-07 00:53:49.285501 | orchestrator | 2026-01-07 00:53:49.285514 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-07 00:53:49.285527 | orchestrator | Wednesday 07 January 2026 00:53:23 +0000 (0:00:00.587) 0:00:04.550 ***** 2026-01-07 00:53:49.285539 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 00:53:49.285552 | orchestrator | 2026-01-07 00:53:49.285563 | orchestrator | TASK 
[Change server address in the kubeconfig inside the manager service] ****** 2026-01-07 00:53:49.285577 | orchestrator | Wednesday 07 January 2026 00:53:25 +0000 (0:00:01.641) 0:00:06.192 ***** 2026-01-07 00:53:49.285615 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 00:53:49.285628 | orchestrator | 2026-01-07 00:53:49.285641 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-07 00:53:49.285653 | orchestrator | Wednesday 07 January 2026 00:53:25 +0000 (0:00:00.892) 0:00:07.084 ***** 2026-01-07 00:53:49.285665 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:49.285678 | orchestrator | 2026-01-07 00:53:49.285691 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-07 00:53:49.285703 | orchestrator | Wednesday 07 January 2026 00:53:26 +0000 (0:00:00.401) 0:00:07.486 ***** 2026-01-07 00:53:49.285716 | orchestrator | ok: [testbed-manager] 2026-01-07 00:53:49.285773 | orchestrator | 2026-01-07 00:53:49.285784 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:53:49.285795 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:53:49.285806 | orchestrator | 2026-01-07 00:53:49.285817 | orchestrator | 2026-01-07 00:53:49.285828 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:53:49.285839 | orchestrator | Wednesday 07 January 2026 00:53:26 +0000 (0:00:00.301) 0:00:07.788 ***** 2026-01-07 00:53:49.285849 | orchestrator | =============================================================================== 2026-01-07 00:53:49.285860 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.69s 2026-01-07 00:53:49.285871 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.64s 2026-01-07 
00:53:49.285882 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.89s 2026-01-07 00:53:49.285912 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s 2026-01-07 00:53:49.285924 | orchestrator | Get home directory of operator user ------------------------------------- 0.72s 2026-01-07 00:53:49.285945 | orchestrator | Create .kube directory -------------------------------------------------- 0.65s 2026-01-07 00:53:49.285956 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.59s 2026-01-07 00:53:49.285967 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.40s 2026-01-07 00:53:49.285978 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s 2026-01-07 00:53:49.285988 | orchestrator | 2026-01-07 00:53:49.285999 | orchestrator | 2026-01-07 00:53:49.286010 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-07 00:53:49.286085 | orchestrator | 2026-01-07 00:53:49.286097 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-07 00:53:49.286107 | orchestrator | Wednesday 07 January 2026 00:51:27 +0000 (0:00:00.285) 0:00:00.285 ***** 2026-01-07 00:53:49.286118 | orchestrator | ok: [localhost] => { 2026-01-07 00:53:49.286130 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-07 00:53:49.286141 | orchestrator | } 2026-01-07 00:53:49.286153 | orchestrator | 2026-01-07 00:53:49.286164 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-07 00:53:49.286175 | orchestrator | Wednesday 07 January 2026 00:51:27 +0000 (0:00:00.111) 0:00:00.397 ***** 2026-01-07 00:53:49.286187 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-07 00:53:49.286228 | orchestrator | ...ignoring 2026-01-07 00:53:49.286269 | orchestrator | 2026-01-07 00:53:49.286288 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-07 00:53:49.286306 | orchestrator | Wednesday 07 January 2026 00:51:31 +0000 (0:00:03.401) 0:00:03.799 ***** 2026-01-07 00:53:49.286324 | orchestrator | skipping: [localhost] 2026-01-07 00:53:49.286342 | orchestrator | 2026-01-07 00:53:49.286361 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-07 00:53:49.286379 | orchestrator | Wednesday 07 January 2026 00:51:31 +0000 (0:00:00.201) 0:00:04.001 ***** 2026-01-07 00:53:49.286675 | orchestrator | ok: [localhost] 2026-01-07 00:53:49.286698 | orchestrator | 2026-01-07 00:53:49.286732 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:53:49.286751 | orchestrator | 2026-01-07 00:53:49.286770 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:53:49.286782 | orchestrator | Wednesday 07 January 2026 00:51:31 +0000 (0:00:00.588) 0:00:04.590 ***** 2026-01-07 00:53:49.286792 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:49.286803 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:49.286814 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:49.286827 | orchestrator | 2026-01-07 00:53:49.286845 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:53:49.286867 | orchestrator | Wednesday 07 January 2026 00:51:32 +0000 (0:00:00.611) 0:00:05.202 ***** 2026-01-07 00:53:49.286892 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-07 00:53:49.286910 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-01-07 00:53:49.286927 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-07 00:53:49.286944 | orchestrator | 2026-01-07 00:53:49.286963 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-07 00:53:49.286981 | orchestrator | 2026-01-07 00:53:49.286998 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-07 00:53:49.287016 | orchestrator | Wednesday 07 January 2026 00:51:34 +0000 (0:00:01.674) 0:00:06.876 ***** 2026-01-07 00:53:49.287036 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:49.287056 | orchestrator | 2026-01-07 00:53:49.287077 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-07 00:53:49.287097 | orchestrator | Wednesday 07 January 2026 00:51:34 +0000 (0:00:00.715) 0:00:07.591 ***** 2026-01-07 00:53:49.287135 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:49.287154 | orchestrator | 2026-01-07 00:53:49.287172 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-07 00:53:49.287190 | orchestrator | Wednesday 07 January 2026 00:51:35 +0000 (0:00:01.031) 0:00:08.623 ***** 2026-01-07 00:53:49.287208 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:49.287227 | orchestrator | 2026-01-07 00:53:49.287307 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-07 00:53:49.287328 | orchestrator | Wednesday 07 January 2026 00:51:36 +0000 (0:00:00.381) 0:00:09.004 ***** 2026-01-07 00:53:49.287342 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:49.287355 | orchestrator | 2026-01-07 00:53:49.287368 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-07 00:53:49.287381 | 
orchestrator | Wednesday 07 January 2026 00:51:36 +0000 (0:00:00.414) 0:00:09.419 ***** 2026-01-07 00:53:49.287394 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:49.287406 | orchestrator | 2026-01-07 00:53:49.287418 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-07 00:53:49.287430 | orchestrator | Wednesday 07 January 2026 00:51:37 +0000 (0:00:00.357) 0:00:09.777 ***** 2026-01-07 00:53:49.287442 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:49.287454 | orchestrator | 2026-01-07 00:53:49.287466 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-07 00:53:49.287479 | orchestrator | Wednesday 07 January 2026 00:51:37 +0000 (0:00:00.510) 0:00:10.288 ***** 2026-01-07 00:53:49.287492 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:49.287506 | orchestrator | 2026-01-07 00:53:49.287519 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-07 00:53:49.287549 | orchestrator | Wednesday 07 January 2026 00:51:38 +0000 (0:00:00.860) 0:00:11.148 ***** 2026-01-07 00:53:49.287563 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:49.287575 | orchestrator | 2026-01-07 00:53:49.287588 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-07 00:53:49.287600 | orchestrator | Wednesday 07 January 2026 00:51:39 +0000 (0:00:01.014) 0:00:12.162 ***** 2026-01-07 00:53:49.287611 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:49.287622 | orchestrator | 2026-01-07 00:53:49.287633 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-07 00:53:49.287644 | orchestrator | Wednesday 07 January 2026 00:51:39 +0000 (0:00:00.551) 0:00:12.714 ***** 2026-01-07 00:53:49.287655 | orchestrator | 
skipping: [testbed-node-0] 2026-01-07 00:53:49.287666 | orchestrator | 2026-01-07 00:53:49.287677 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-07 00:53:49.287688 | orchestrator | Wednesday 07 January 2026 00:51:40 +0000 (0:00:00.561) 0:00:13.275 ***** 2026-01-07 00:53:49.289377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:49.289539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:49.289582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:49.289593 | orchestrator | 2026-01-07 00:53:49.289604 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-07 00:53:49.289614 | orchestrator | Wednesday 07 January 2026 00:51:41 +0000 (0:00:01.193) 0:00:14.468 ***** 2026-01-07 00:53:49.289645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:49.289662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:49.289679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:49.289690 | orchestrator | 2026-01-07 00:53:49.289700 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-07 00:53:49.289709 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:04.224) 0:00:18.693 ***** 2026-01-07 00:53:49.289719 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-07 00:53:49.289730 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-07 00:53:49.289740 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-07 00:53:49.289750 | orchestrator | 2026-01-07 00:53:49.289760 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-01-07 00:53:49.289769 | orchestrator | Wednesday 07 January 2026 00:51:48 +0000 (0:00:02.589) 0:00:21.282 ***** 2026-01-07 00:53:49.289779 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-07 00:53:49.289790 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-07 00:53:49.289799 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-07 00:53:49.289809 | orchestrator | 2026-01-07 00:53:49.289819 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-07 00:53:49.289835 | orchestrator | Wednesday 07 January 2026 00:51:50 +0000 (0:00:02.421) 0:00:23.704 ***** 2026-01-07 00:53:49.289845 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-07 00:53:49.289855 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-07 00:53:49.289864 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-07 00:53:49.289874 | orchestrator | 2026-01-07 00:53:49.289884 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-07 00:53:49.289894 | orchestrator | Wednesday 07 January 2026 00:51:52 +0000 (0:00:01.617) 0:00:25.321 ***** 2026-01-07 00:53:49.289903 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-07 00:53:49.289913 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-07 00:53:49.289923 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-07 00:53:49.289933 | orchestrator | 2026-01-07 00:53:49.289956 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-01-07 00:53:49.289971 | orchestrator | Wednesday 07 January 2026 00:51:54 +0000 (0:00:02.322) 0:00:27.644 ***** 2026-01-07 00:53:49.289987 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-07 00:53:49.290002 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-07 00:53:49.290125 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-07 00:53:49.290150 | orchestrator | 2026-01-07 00:53:49.290166 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-07 00:53:49.290183 | orchestrator | Wednesday 07 January 2026 00:51:57 +0000 (0:00:02.364) 0:00:30.009 ***** 2026-01-07 00:53:49.290198 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-07 00:53:49.290215 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-07 00:53:49.290265 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-07 00:53:49.290285 | orchestrator | 2026-01-07 00:53:49.290301 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-07 00:53:49.290317 | orchestrator | Wednesday 07 January 2026 00:51:58 +0000 (0:00:01.384) 0:00:31.394 ***** 2026-01-07 00:53:49.290332 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:49.290350 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:49.290373 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:49.290389 | orchestrator | 2026-01-07 00:53:49.290404 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-07 00:53:49.290419 | orchestrator | Wednesday 07 January 2026 
00:51:59 +0000 (0:00:01.112) 0:00:32.506 ***** 2026-01-07 00:53:49.290436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:49.290467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:49.290497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:49.290514 | orchestrator | 2026-01-07 00:53:49.290528 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-07 00:53:49.290544 | orchestrator | Wednesday 07 January 2026 00:52:02 +0000 (0:00:02.450) 0:00:34.957 ***** 2026-01-07 00:53:49.290559 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:49.290574 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:49.290588 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:49.290602 | orchestrator | 2026-01-07 00:53:49.290618 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-07 
00:53:49.290633 | orchestrator | Wednesday 07 January 2026 00:52:03 +0000 (0:00:00.871) 0:00:35.828 ***** 2026-01-07 00:53:49.290655 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:49.290670 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:49.290683 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:49.290698 | orchestrator | 2026-01-07 00:53:49.290714 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-07 00:53:49.290730 | orchestrator | Wednesday 07 January 2026 00:52:10 +0000 (0:00:06.933) 0:00:42.762 ***** 2026-01-07 00:53:49.290745 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:49.290761 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:49.290777 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:49.290793 | orchestrator | 2026-01-07 00:53:49.290809 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-07 00:53:49.290824 | orchestrator | 2026-01-07 00:53:49.290840 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-07 00:53:49.290856 | orchestrator | Wednesday 07 January 2026 00:52:10 +0000 (0:00:00.352) 0:00:43.114 ***** 2026-01-07 00:53:49.290871 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:49.290888 | orchestrator | 2026-01-07 00:53:49.290902 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-07 00:53:49.290918 | orchestrator | Wednesday 07 January 2026 00:52:11 +0000 (0:00:00.676) 0:00:43.790 ***** 2026-01-07 00:53:49.290933 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:49.290950 | orchestrator | 2026-01-07 00:53:49.290967 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-07 00:53:49.290978 | orchestrator | Wednesday 07 January 2026 00:52:11 +0000 (0:00:00.234) 0:00:44.024 ***** 2026-01-07 
00:53:49.290987 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:49.290997 | orchestrator | 2026-01-07 00:53:49.291006 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-07 00:53:49.291016 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:06.878) 0:00:50.903 ***** 2026-01-07 00:53:49.291026 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:49.291035 | orchestrator | 2026-01-07 00:53:49.291045 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-07 00:53:49.291054 | orchestrator | 2026-01-07 00:53:49.291064 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-07 00:53:49.291084 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:50.285) 0:01:41.189 ***** 2026-01-07 00:53:49.291093 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:49.291103 | orchestrator | 2026-01-07 00:53:49.291113 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-07 00:53:49.291122 | orchestrator | Wednesday 07 January 2026 00:53:09 +0000 (0:00:00.620) 0:01:41.810 ***** 2026-01-07 00:53:49.291132 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:49.291141 | orchestrator | 2026-01-07 00:53:49.291151 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-07 00:53:49.291160 | orchestrator | Wednesday 07 January 2026 00:53:09 +0000 (0:00:00.324) 0:01:42.135 ***** 2026-01-07 00:53:49.291170 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:49.291179 | orchestrator | 2026-01-07 00:53:49.291189 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-07 00:53:49.291198 | orchestrator | Wednesday 07 January 2026 00:53:16 +0000 (0:00:06.971) 0:01:49.106 ***** 2026-01-07 00:53:49.291208 | orchestrator | changed: 
[testbed-node-1] 2026-01-07 00:53:49.291217 | orchestrator | 2026-01-07 00:53:49.291227 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-07 00:53:49.291236 | orchestrator | 2026-01-07 00:53:49.291329 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-07 00:53:49.291346 | orchestrator | Wednesday 07 January 2026 00:53:29 +0000 (0:00:13.087) 0:02:02.194 ***** 2026-01-07 00:53:49.291364 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:49.291380 | orchestrator | 2026-01-07 00:53:49.291408 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-07 00:53:49.291422 | orchestrator | Wednesday 07 January 2026 00:53:30 +0000 (0:00:00.548) 0:02:02.743 ***** 2026-01-07 00:53:49.291438 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:49.291466 | orchestrator | 2026-01-07 00:53:49.291481 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-07 00:53:49.291494 | orchestrator | Wednesday 07 January 2026 00:53:30 +0000 (0:00:00.257) 0:02:03.001 ***** 2026-01-07 00:53:49.291505 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:49.291518 | orchestrator | 2026-01-07 00:53:49.291531 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-07 00:53:49.291543 | orchestrator | Wednesday 07 January 2026 00:53:32 +0000 (0:00:01.931) 0:02:04.933 ***** 2026-01-07 00:53:49.291556 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:49.291569 | orchestrator | 2026-01-07 00:53:49.291584 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-07 00:53:49.291599 | orchestrator | 2026-01-07 00:53:49.291614 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-07 00:53:49.291627 | orchestrator | Wednesday 
07 January 2026 00:53:45 +0000 (0:00:13.012) 0:02:17.946 ***** 2026-01-07 00:53:49.291640 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:49.291653 | orchestrator | 2026-01-07 00:53:49.291665 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-07 00:53:49.291678 | orchestrator | Wednesday 07 January 2026 00:53:45 +0000 (0:00:00.545) 0:02:18.491 ***** 2026-01-07 00:53:49.291691 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-07 00:53:49.291704 | orchestrator | enable_outward_rabbitmq_True 2026-01-07 00:53:49.291718 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-07 00:53:49.291731 | orchestrator | outward_rabbitmq_restart 2026-01-07 00:53:49.291744 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:49.291753 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:49.291761 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:49.291768 | orchestrator | 2026-01-07 00:53:49.291776 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-07 00:53:49.291784 | orchestrator | skipping: no hosts matched 2026-01-07 00:53:49.291792 | orchestrator | 2026-01-07 00:53:49.291800 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-07 00:53:49.291818 | orchestrator | skipping: no hosts matched 2026-01-07 00:53:49.291826 | orchestrator | 2026-01-07 00:53:49.291840 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-07 00:53:49.291848 | orchestrator | skipping: no hosts matched 2026-01-07 00:53:49.291856 | orchestrator | 2026-01-07 00:53:49.291864 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:53:49.291874 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 
skipped=1  rescued=0 ignored=1  2026-01-07 00:53:49.291883 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-07 00:53:49.291890 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:53:49.291898 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:53:49.291906 | orchestrator | 2026-01-07 00:53:49.291914 | orchestrator | 2026-01-07 00:53:49.291922 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:53:49.291930 | orchestrator | Wednesday 07 January 2026 00:53:48 +0000 (0:00:02.699) 0:02:21.191 ***** 2026-01-07 00:53:49.291937 | orchestrator | =============================================================================== 2026-01-07 00:53:49.291945 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 76.39s 2026-01-07 00:53:49.291953 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.78s 2026-01-07 00:53:49.291961 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.93s 2026-01-07 00:53:49.291968 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.22s 2026-01-07 00:53:49.291976 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.41s 2026-01-07 00:53:49.291984 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.70s 2026-01-07 00:53:49.291992 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.59s 2026-01-07 00:53:49.291999 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.45s 2026-01-07 00:53:49.292007 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 
2.42s 2026-01-07 00:53:49.292015 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.36s 2026-01-07 00:53:49.292023 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.32s 2026-01-07 00:53:49.292030 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.85s 2026-01-07 00:53:49.292038 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.67s 2026-01-07 00:53:49.292046 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.62s 2026-01-07 00:53:49.292054 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.38s 2026-01-07 00:53:49.292062 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.20s 2026-01-07 00:53:49.292070 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.11s 2026-01-07 00:53:49.292084 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.03s 2026-01-07 00:53:49.292092 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s 2026-01-07 00:53:49.292100 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.87s 2026-01-07 00:53:49.292108 | orchestrator | 2026-01-07 00:53:49 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:53:49.292116 | orchestrator | 2026-01-07 00:53:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:52.338763 | orchestrator | 2026-01-07 00:53:52 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED 2026-01-07 00:53:52.340993 | orchestrator | 2026-01-07 00:53:52 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:53:52.343358 | orchestrator | 2026-01-07 00:53:52 | INFO  | Task 
508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:53:52.343430 | orchestrator | 2026-01-07 00:53:52 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:38.071590 | orchestrator | 2026-01-07 00:54:38 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED
2026-01-07 00:54:38.074690 | orchestrator | 2026-01-07 00:54:38 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED
2026-01-07 00:54:38.077555 | orchestrator | 2026-01-07 00:54:38 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:54:38.077703 | orchestrator | 2026-01-07 00:54:38 | INFO  | Wait 1 second(s) until the next
check
2026-01-07 00:54:41.121843 | orchestrator | 2026-01-07 00:54:41 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state STARTED
2026-01-07 00:54:41.124841 | orchestrator | 2026-01-07 00:54:41 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED
2026-01-07 00:54:41.126939 | orchestrator | 2026-01-07 00:54:41 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED
2026-01-07 00:54:41.127047 | orchestrator | 2026-01-07 00:54:41 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:44.167170 | orchestrator | 2026-01-07 00:54:44 | INFO  | Task f4845b71-64fb-47e9-9132-f5f80aefa331 is in state SUCCESS
2026-01-07 00:54:44.170968 | orchestrator |
2026-01-07 00:54:44.171044 | orchestrator |
2026-01-07 00:54:44.171056 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:54:44.171067 | orchestrator |
2026-01-07 00:54:44.171078 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 00:54:44.171104 | orchestrator | Wednesday 07 January 2026 00:52:20 +0000 (0:00:00.199) 0:00:00.199 *****
2026-01-07 00:54:44.171116 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:44.171127 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:44.171256 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:44.171276 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:44.171379 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:44.171390 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:44.171397 | orchestrator |
2026-01-07 00:54:44.171404 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:54:44.171411 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:00.713) 0:00:00.913 *****
2026-01-07 00:54:44.171417 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-01-07 00:54:44.171424 | orchestrator | ok: [testbed-node-4] =>
(item=enable_ovn_True) 2026-01-07 00:54:44.171430 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-07 00:54:44.171437 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-07 00:54:44.171443 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-07 00:54:44.171449 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-07 00:54:44.171456 | orchestrator | 2026-01-07 00:54:44.171462 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-07 00:54:44.171469 | orchestrator | 2026-01-07 00:54:44.171475 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-07 00:54:44.171481 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:00.958) 0:00:01.872 ***** 2026-01-07 00:54:44.172026 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:44.172045 | orchestrator | 2026-01-07 00:54:44.172052 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-07 00:54:44.172058 | orchestrator | Wednesday 07 January 2026 00:52:23 +0000 (0:00:01.419) 0:00:03.291 ***** 2026-01-07 00:54:44.172071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172177 | orchestrator | 2026-01-07 00:54:44.172196 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-07 00:54:44.172202 | orchestrator | Wednesday 07 January 2026 00:52:25 +0000 (0:00:01.940) 0:00:05.232 ***** 2026-01-07 00:54:44.172209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172256 | orchestrator | 2026-01-07 00:54:44.172263 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-07 00:54:44.172269 | orchestrator | Wednesday 07 January 2026 00:52:27 +0000 (0:00:02.017) 0:00:07.249 ***** 2026-01-07 00:54:44.172275 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172282 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172329 | orchestrator | 2026-01-07 00:54:44.172336 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-07 00:54:44.172342 | orchestrator | Wednesday 07 January 2026 00:52:29 +0000 (0:00:01.791) 0:00:09.040 ***** 2026-01-07 00:54:44.172351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172370 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172407 | orchestrator | 2026-01-07 00:54:44.172417 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-07 00:54:44.172424 | orchestrator | Wednesday 07 January 2026 00:52:30 +0000 (0:00:01.830) 0:00:10.870 ***** 2026-01-07 00:54:44.172430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172440 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172468 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.172475 | orchestrator | 2026-01-07 00:54:44.172481 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-07 00:54:44.172487 | orchestrator | Wednesday 07 January 2026 00:52:32 +0000 (0:00:01.524) 0:00:12.395 ***** 2026-01-07 00:54:44.172494 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:54:44.172501 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:54:44.172507 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:54:44.172513 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:44.172519 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:44.172527 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:44.172536 | orchestrator | 2026-01-07 00:54:44.172546 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-07 00:54:44.172552 | orchestrator | Wednesday 07 January 2026 00:52:35 +0000 (0:00:02.762) 0:00:15.157 ***** 2026-01-07 00:54:44.172560 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-07 00:54:44.172571 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-07 00:54:44.172581 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-07 00:54:44.172590 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-07 00:54:44.172600 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:54:44.172612 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-07 00:54:44.172622 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-07 00:54:44.172639 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:54:44.172655 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:54:44.172663 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:54:44.172673 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:54:44.172684 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:54:44.172694 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:54:44.172705 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:54:44.172717 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:54:44.172729 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:54:44.172740 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:54:44.172750 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 
00:54:44.172757 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:54:44.172765 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:54:44.172772 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:54:44.172783 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:54:44.172791 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:54:44.172798 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:54:44.172805 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:54:44.172813 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:54:44.172820 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:54:44.172828 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:54:44.172835 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:54:44.172842 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:54:44.172850 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:54:44.172857 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:54:44.172864 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-monitor-all', 'value': False}) 2026-01-07 00:54:44.172872 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-07 00:54:44.172879 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:54:44.172887 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:54:44.172894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:54:44.172906 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-07 00:54:44.172914 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-07 00:54:44.172921 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-07 00:54:44.172929 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-07 00:54:44.172936 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-07 00:54:44.172943 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-07 00:54:44.172951 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-07 00:54:44.172962 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-07 00:54:44.172970 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 
'absent'}) 2026-01-07 00:54:44.172977 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-07 00:54:44.172983 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-07 00:54:44.172989 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-07 00:54:44.172996 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-07 00:54:44.173002 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-07 00:54:44.173008 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-07 00:54:44.173014 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-07 00:54:44.173021 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-07 00:54:44.173027 | orchestrator | 2026-01-07 00:54:44.173033 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:54:44.173039 | orchestrator | Wednesday 07 January 2026 00:52:57 +0000 (0:00:22.476) 0:00:37.633 ***** 2026-01-07 00:54:44.173046 | orchestrator | 2026-01-07 00:54:44.173052 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:54:44.173058 | orchestrator | Wednesday 07 January 2026 00:52:57 +0000 (0:00:00.161) 0:00:37.795 ***** 2026-01-07 00:54:44.173064 | orchestrator | 2026-01-07 00:54:44.173073 | 
orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:54:44.173080 | orchestrator | Wednesday 07 January 2026 00:52:58 +0000 (0:00:00.145) 0:00:37.941 ***** 2026-01-07 00:54:44.173086 | orchestrator | 2026-01-07 00:54:44.173109 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:54:44.173118 | orchestrator | Wednesday 07 January 2026 00:52:58 +0000 (0:00:00.144) 0:00:38.085 ***** 2026-01-07 00:54:44.173124 | orchestrator | 2026-01-07 00:54:44.173130 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:54:44.173137 | orchestrator | Wednesday 07 January 2026 00:52:58 +0000 (0:00:00.085) 0:00:38.171 ***** 2026-01-07 00:54:44.173143 | orchestrator | 2026-01-07 00:54:44.173149 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:54:44.173159 | orchestrator | Wednesday 07 January 2026 00:52:58 +0000 (0:00:00.084) 0:00:38.256 ***** 2026-01-07 00:54:44.173165 | orchestrator | 2026-01-07 00:54:44.173171 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-07 00:54:44.173177 | orchestrator | Wednesday 07 January 2026 00:52:58 +0000 (0:00:00.156) 0:00:38.413 ***** 2026-01-07 00:54:44.173184 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:44.173190 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:44.173196 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:44.173202 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.173208 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.173214 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.173220 | orchestrator | 2026-01-07 00:54:44.173227 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-07 00:54:44.173233 | orchestrator | Wednesday 07 January 2026 00:53:01 
+0000 (0:00:02.646) 0:00:41.059 ***** 2026-01-07 00:54:44.173239 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:44.173245 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:54:44.173251 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:54:44.173257 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:44.173264 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:54:44.173270 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:44.173276 | orchestrator | 2026-01-07 00:54:44.173282 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-07 00:54:44.173288 | orchestrator | 2026-01-07 00:54:44.173295 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-07 00:54:44.173301 | orchestrator | Wednesday 07 January 2026 00:53:27 +0000 (0:00:26.343) 0:01:07.403 ***** 2026-01-07 00:54:44.173307 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:44.173313 | orchestrator | 2026-01-07 00:54:44.173319 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-07 00:54:44.173325 | orchestrator | Wednesday 07 January 2026 00:53:28 +0000 (0:00:00.773) 0:01:08.176 ***** 2026-01-07 00:54:44.173331 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:44.173338 | orchestrator | 2026-01-07 00:54:44.173344 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-07 00:54:44.173350 | orchestrator | Wednesday 07 January 2026 00:53:28 +0000 (0:00:00.542) 0:01:08.719 ***** 2026-01-07 00:54:44.173356 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.173362 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.173368 | orchestrator | ok: [testbed-node-2] 2026-01-07 
00:54:44.173374 | orchestrator | 2026-01-07 00:54:44.173380 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-07 00:54:44.173386 | orchestrator | Wednesday 07 January 2026 00:53:29 +0000 (0:00:00.930) 0:01:09.650 ***** 2026-01-07 00:54:44.173393 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.173399 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.173405 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.173415 | orchestrator | 2026-01-07 00:54:44.173422 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-07 00:54:44.173428 | orchestrator | Wednesday 07 January 2026 00:53:30 +0000 (0:00:00.335) 0:01:09.985 ***** 2026-01-07 00:54:44.173434 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.173440 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.173446 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.173452 | orchestrator | 2026-01-07 00:54:44.173458 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-07 00:54:44.173464 | orchestrator | Wednesday 07 January 2026 00:53:30 +0000 (0:00:00.415) 0:01:10.401 ***** 2026-01-07 00:54:44.173471 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.173477 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.173483 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.173493 | orchestrator | 2026-01-07 00:54:44.173499 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-07 00:54:44.173505 | orchestrator | Wednesday 07 January 2026 00:53:30 +0000 (0:00:00.482) 0:01:10.883 ***** 2026-01-07 00:54:44.173511 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.173517 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.173523 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.173530 | orchestrator | 2026-01-07 00:54:44.173536 | orchestrator 
| TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-07 00:54:44.173542 | orchestrator | Wednesday 07 January 2026 00:53:31 +0000 (0:00:00.586) 0:01:11.470 ***** 2026-01-07 00:54:44.173548 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173554 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173560 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173566 | orchestrator | 2026-01-07 00:54:44.173573 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-07 00:54:44.173579 | orchestrator | Wednesday 07 January 2026 00:53:31 +0000 (0:00:00.354) 0:01:11.825 ***** 2026-01-07 00:54:44.173585 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173591 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173597 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173603 | orchestrator | 2026-01-07 00:54:44.173609 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-07 00:54:44.173616 | orchestrator | Wednesday 07 January 2026 00:53:32 +0000 (0:00:00.412) 0:01:12.238 ***** 2026-01-07 00:54:44.173622 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173634 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173641 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173647 | orchestrator | 2026-01-07 00:54:44.173653 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-07 00:54:44.173659 | orchestrator | Wednesday 07 January 2026 00:53:32 +0000 (0:00:00.336) 0:01:12.574 ***** 2026-01-07 00:54:44.173665 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173671 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173677 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173683 | orchestrator | 2026-01-07 00:54:44.173690 | orchestrator | 
TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-07 00:54:44.173696 | orchestrator | Wednesday 07 January 2026 00:53:33 +0000 (0:00:00.573) 0:01:13.147 ***** 2026-01-07 00:54:44.173702 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173708 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173714 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173720 | orchestrator | 2026-01-07 00:54:44.173727 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-07 00:54:44.173733 | orchestrator | Wednesday 07 January 2026 00:53:33 +0000 (0:00:00.327) 0:01:13.475 ***** 2026-01-07 00:54:44.173739 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173745 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173751 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173757 | orchestrator | 2026-01-07 00:54:44.173764 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-07 00:54:44.173770 | orchestrator | Wednesday 07 January 2026 00:53:33 +0000 (0:00:00.294) 0:01:13.769 ***** 2026-01-07 00:54:44.173776 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173782 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173788 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173794 | orchestrator | 2026-01-07 00:54:44.173800 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-07 00:54:44.173806 | orchestrator | Wednesday 07 January 2026 00:53:34 +0000 (0:00:00.349) 0:01:14.119 ***** 2026-01-07 00:54:44.173813 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173819 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173825 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173831 | orchestrator | 2026-01-07 00:54:44.173841 | orchestrator | 
TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-07 00:54:44.173847 | orchestrator | Wednesday 07 January 2026 00:53:34 +0000 (0:00:00.575) 0:01:14.694 ***** 2026-01-07 00:54:44.173853 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173859 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173865 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173871 | orchestrator | 2026-01-07 00:54:44.173877 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-07 00:54:44.173884 | orchestrator | Wednesday 07 January 2026 00:53:35 +0000 (0:00:00.327) 0:01:15.022 ***** 2026-01-07 00:54:44.173890 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173896 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173902 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173908 | orchestrator | 2026-01-07 00:54:44.173914 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-07 00:54:44.173920 | orchestrator | Wednesday 07 January 2026 00:53:35 +0000 (0:00:00.352) 0:01:15.374 ***** 2026-01-07 00:54:44.173927 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173933 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173939 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173945 | orchestrator | 2026-01-07 00:54:44.173951 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-07 00:54:44.173957 | orchestrator | Wednesday 07 January 2026 00:53:35 +0000 (0:00:00.353) 0:01:15.727 ***** 2026-01-07 00:54:44.173964 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.173970 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.173980 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.173986 | orchestrator | 2026-01-07 00:54:44.173992 | orchestrator | 
TASK [ovn-db : include_tasks] ************************************************** 2026-01-07 00:54:44.173999 | orchestrator | Wednesday 07 January 2026 00:53:36 +0000 (0:00:00.309) 0:01:16.037 ***** 2026-01-07 00:54:44.174005 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:44.174011 | orchestrator | 2026-01-07 00:54:44.174064 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-07 00:54:44.174071 | orchestrator | Wednesday 07 January 2026 00:53:37 +0000 (0:00:00.891) 0:01:16.928 ***** 2026-01-07 00:54:44.174077 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.174083 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.174165 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.174173 | orchestrator | 2026-01-07 00:54:44.174179 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-07 00:54:44.174185 | orchestrator | Wednesday 07 January 2026 00:53:37 +0000 (0:00:00.460) 0:01:17.388 ***** 2026-01-07 00:54:44.174192 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.174198 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.174204 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.174210 | orchestrator | 2026-01-07 00:54:44.174216 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-07 00:54:44.174222 | orchestrator | Wednesday 07 January 2026 00:53:37 +0000 (0:00:00.439) 0:01:17.828 ***** 2026-01-07 00:54:44.174228 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.174235 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.174241 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.174247 | orchestrator | 2026-01-07 00:54:44.174253 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-07 
00:54:44.174259 | orchestrator | Wednesday 07 January 2026 00:53:38 +0000 (0:00:00.597) 0:01:18.425 ***** 2026-01-07 00:54:44.174265 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.174271 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.174277 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.174284 | orchestrator | 2026-01-07 00:54:44.174290 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-07 00:54:44.174301 | orchestrator | Wednesday 07 January 2026 00:53:38 +0000 (0:00:00.324) 0:01:18.750 ***** 2026-01-07 00:54:44.174308 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.174314 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.174320 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.174326 | orchestrator | 2026-01-07 00:54:44.174332 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-07 00:54:44.174338 | orchestrator | Wednesday 07 January 2026 00:53:39 +0000 (0:00:00.351) 0:01:19.101 ***** 2026-01-07 00:54:44.174345 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.174351 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.174357 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.174363 | orchestrator | 2026-01-07 00:54:44.174369 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-07 00:54:44.174375 | orchestrator | Wednesday 07 January 2026 00:53:39 +0000 (0:00:00.322) 0:01:19.424 ***** 2026-01-07 00:54:44.174381 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.174388 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.174394 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.174400 | orchestrator | 2026-01-07 00:54:44.174406 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] 
******************** 2026-01-07 00:54:44.174412 | orchestrator | Wednesday 07 January 2026 00:53:40 +0000 (0:00:00.485) 0:01:19.910 ***** 2026-01-07 00:54:44.174418 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.174424 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.174430 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.174436 | orchestrator | 2026-01-07 00:54:44.174443 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-07 00:54:44.174449 | orchestrator | Wednesday 07 January 2026 00:53:40 +0000 (0:00:00.324) 0:01:20.234 ***** 2026-01-07 00:54:44.174455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174584 | orchestrator | 2026-01-07 00:54:44.174591 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-07 00:54:44.174597 | orchestrator | Wednesday 07 January 2026 00:53:41 +0000 (0:00:01.326) 0:01:21.561 ***** 2026-01-07 00:54:44.174604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174623 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174673 | orchestrator | 2026-01-07 00:54:44.174680 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-07 00:54:44.174686 | orchestrator | Wednesday 07 January 2026 00:53:47 +0000 (0:00:05.396) 0:01:26.957 ***** 2026-01-07 00:54:44.174692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.174718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.175690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.175721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.175728 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.175734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.175741 | orchestrator | 2026-01-07 00:54:44.175747 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:54:44.175754 | orchestrator | Wednesday 07 January 2026 00:53:49 +0000 (0:00:02.504) 0:01:29.462 ***** 2026-01-07 00:54:44.175760 | orchestrator | 2026-01-07 00:54:44.175767 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:54:44.175777 | orchestrator | Wednesday 07 January 2026 00:53:49 +0000 (0:00:00.068) 0:01:29.531 ***** 2026-01-07 00:54:44.175783 | orchestrator | 2026-01-07 00:54:44.175790 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:54:44.175796 | orchestrator | Wednesday 07 January 2026 00:53:49 +0000 (0:00:00.064) 0:01:29.595 ***** 2026-01-07 00:54:44.175802 | orchestrator | 2026-01-07 00:54:44.175808 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-07 00:54:44.175814 | orchestrator | Wednesday 07 January 2026 00:53:49 +0000 (0:00:00.067) 0:01:29.663 ***** 2026-01-07 00:54:44.175821 | orchestrator | changed: 
[testbed-node-0] 2026-01-07 00:54:44.175881 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:44.175888 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:44.175894 | orchestrator | 2026-01-07 00:54:44.175900 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-07 00:54:44.175906 | orchestrator | Wednesday 07 January 2026 00:53:52 +0000 (0:00:02.669) 0:01:32.333 ***** 2026-01-07 00:54:44.175913 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:44.175919 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:44.175925 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:44.175931 | orchestrator | 2026-01-07 00:54:44.175937 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-07 00:54:44.175944 | orchestrator | Wednesday 07 January 2026 00:53:55 +0000 (0:00:02.929) 0:01:35.263 ***** 2026-01-07 00:54:44.175950 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:44.175956 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:44.175962 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:44.175968 | orchestrator | 2026-01-07 00:54:44.175975 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-07 00:54:44.175981 | orchestrator | Wednesday 07 January 2026 00:54:02 +0000 (0:00:07.596) 0:01:42.859 ***** 2026-01-07 00:54:44.175987 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.175993 | orchestrator | 2026-01-07 00:54:44.175999 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-07 00:54:44.176005 | orchestrator | Wednesday 07 January 2026 00:54:03 +0000 (0:00:00.351) 0:01:43.211 ***** 2026-01-07 00:54:44.176017 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.176024 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.176030 | orchestrator | ok: [testbed-node-1] 2026-01-07 
00:54:44.176036 | orchestrator | 2026-01-07 00:54:44.176042 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-07 00:54:44.176049 | orchestrator | Wednesday 07 January 2026 00:54:04 +0000 (0:00:00.970) 0:01:44.182 ***** 2026-01-07 00:54:44.176055 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.176061 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.176067 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:44.176073 | orchestrator | 2026-01-07 00:54:44.176079 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-07 00:54:44.176085 | orchestrator | Wednesday 07 January 2026 00:54:04 +0000 (0:00:00.614) 0:01:44.796 ***** 2026-01-07 00:54:44.176131 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.176138 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.176144 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.176150 | orchestrator | 2026-01-07 00:54:44.176156 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-07 00:54:44.176163 | orchestrator | Wednesday 07 January 2026 00:54:05 +0000 (0:00:00.765) 0:01:45.562 ***** 2026-01-07 00:54:44.176169 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.176175 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.176181 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:44.176187 | orchestrator | 2026-01-07 00:54:44.176194 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-07 00:54:44.176200 | orchestrator | Wednesday 07 January 2026 00:54:06 +0000 (0:00:01.033) 0:01:46.595 ***** 2026-01-07 00:54:44.176206 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.176212 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.176224 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.176231 | orchestrator | 
2026-01-07 00:54:44.176237 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-07 00:54:44.176243 | orchestrator | Wednesday 07 January 2026 00:54:07 +0000 (0:00:00.908) 0:01:47.504 ***** 2026-01-07 00:54:44.176249 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.176255 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.176262 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.176268 | orchestrator | 2026-01-07 00:54:44.176274 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-07 00:54:44.176280 | orchestrator | Wednesday 07 January 2026 00:54:08 +0000 (0:00:00.833) 0:01:48.337 ***** 2026-01-07 00:54:44.176286 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.176293 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.176300 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.176307 | orchestrator | 2026-01-07 00:54:44.176314 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-07 00:54:44.176322 | orchestrator | Wednesday 07 January 2026 00:54:08 +0000 (0:00:00.273) 0:01:48.611 ***** 2026-01-07 00:54:44.176329 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176338 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 
00:54:44.176349 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176361 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176370 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176377 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176385 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176392 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176404 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176412 | orchestrator | 2026-01-07 00:54:44.176419 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-07 00:54:44.176426 | orchestrator | Wednesday 07 January 2026 00:54:10 +0000 (0:00:01.734) 0:01:50.346 ***** 2026-01-07 00:54:44.176434 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176441 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176449 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176463 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176486 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176507 | orchestrator | 2026-01-07 00:54:44.176514 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-07 00:54:44.176521 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:04.394) 0:01:54.740 ***** 2026-01-07 00:54:44.176533 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176541 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176549 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176578 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176586 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176601 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:54:44.176608 | orchestrator | 2026-01-07 00:54:44.176615 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:54:44.176622 | orchestrator | Wednesday 07 January 2026 00:54:18 +0000 (0:00:03.241) 0:01:57.982 ***** 2026-01-07 00:54:44.176630 | orchestrator | 2026-01-07 00:54:44.176637 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:54:44.176644 | orchestrator | Wednesday 07 January 2026 00:54:18 +0000 (0:00:00.139) 0:01:58.121 ***** 2026-01-07 00:54:44.176651 | orchestrator | 2026-01-07 00:54:44.176658 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:54:44.176665 | 
orchestrator | Wednesday 07 January 2026 00:54:18 +0000 (0:00:00.128) 0:01:58.249 ***** 2026-01-07 00:54:44.176683 | orchestrator | 2026-01-07 00:54:44.176689 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-07 00:54:44.176696 | orchestrator | Wednesday 07 January 2026 00:54:18 +0000 (0:00:00.079) 0:01:58.329 ***** 2026-01-07 00:54:44.176702 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:44.176708 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:44.176714 | orchestrator | 2026-01-07 00:54:44.176724 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-07 00:54:44.176730 | orchestrator | Wednesday 07 January 2026 00:54:25 +0000 (0:00:06.600) 0:02:04.929 ***** 2026-01-07 00:54:44.176736 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:44.176742 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:44.176755 | orchestrator | 2026-01-07 00:54:44.176761 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-07 00:54:44.176768 | orchestrator | Wednesday 07 January 2026 00:54:31 +0000 (0:00:06.242) 0:02:11.172 ***** 2026-01-07 00:54:44.176774 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:44.176780 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:44.176786 | orchestrator | 2026-01-07 00:54:44.176792 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-07 00:54:44.176799 | orchestrator | Wednesday 07 January 2026 00:54:37 +0000 (0:00:06.497) 0:02:17.670 ***** 2026-01-07 00:54:44.176805 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:44.176811 | orchestrator | 2026-01-07 00:54:44.176817 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-07 00:54:44.176823 | orchestrator | Wednesday 07 January 2026 00:54:37 +0000 (0:00:00.148) 
0:02:17.819 ***** 2026-01-07 00:54:44.176829 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.176836 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.176842 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.176848 | orchestrator | 2026-01-07 00:54:44.176854 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-07 00:54:44.176860 | orchestrator | Wednesday 07 January 2026 00:54:38 +0000 (0:00:00.698) 0:02:18.517 ***** 2026-01-07 00:54:44.176866 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.176873 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.176879 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:44.176885 | orchestrator | 2026-01-07 00:54:44.176891 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-07 00:54:44.176897 | orchestrator | Wednesday 07 January 2026 00:54:39 +0000 (0:00:00.550) 0:02:19.067 ***** 2026-01-07 00:54:44.176904 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.176910 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.176916 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.176922 | orchestrator | 2026-01-07 00:54:44.176928 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-07 00:54:44.176937 | orchestrator | Wednesday 07 January 2026 00:54:39 +0000 (0:00:00.762) 0:02:19.830 ***** 2026-01-07 00:54:44.176944 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:44.176950 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:44.176956 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:44.176962 | orchestrator | 2026-01-07 00:54:44.176968 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-07 00:54:44.176974 | orchestrator | Wednesday 07 January 2026 00:54:40 +0000 (0:00:00.660) 0:02:20.490 ***** 2026-01-07 
00:54:44.176981 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.176987 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.176993 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.176999 | orchestrator | 2026-01-07 00:54:44.177005 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-07 00:54:44.177012 | orchestrator | Wednesday 07 January 2026 00:54:41 +0000 (0:00:00.769) 0:02:21.259 ***** 2026-01-07 00:54:44.177018 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:44.177024 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:44.177030 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:44.177036 | orchestrator | 2026-01-07 00:54:44.177043 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:54:44.177049 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-07 00:54:44.177056 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-07 00:54:44.177062 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-07 00:54:44.177072 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:54:44.177078 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:54:44.177084 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:54:44.177102 | orchestrator | 2026-01-07 00:54:44.177109 | orchestrator | 2026-01-07 00:54:44.177115 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:54:44.177121 | orchestrator | Wednesday 07 January 2026 00:54:42 +0000 (0:00:00.999) 0:02:22.259 ***** 2026-01-07 00:54:44.177127 | 
orchestrator | =============================================================================== 2026-01-07 00:54:44.177133 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.34s 2026-01-07 00:54:44.177139 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.48s 2026-01-07 00:54:44.177146 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.09s 2026-01-07 00:54:44.177152 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.27s 2026-01-07 00:54:44.177158 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.17s 2026-01-07 00:54:44.177164 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.40s 2026-01-07 00:54:44.177170 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.39s 2026-01-07 00:54:44.177180 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.24s 2026-01-07 00:54:44.177186 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.76s 2026-01-07 00:54:44.177192 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.65s 2026-01-07 00:54:44.177198 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.50s 2026-01-07 00:54:44.177204 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.02s 2026-01-07 00:54:44.177210 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.94s 2026-01-07 00:54:44.177217 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.83s 2026-01-07 00:54:44.177223 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.79s 2026-01-07 00:54:44.177229 | orchestrator | 
ovn-db : Ensuring config directories exist ------------------------------ 1.73s 2026-01-07 00:54:44.177235 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.52s 2026-01-07 00:54:44.177241 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.42s 2026-01-07 00:54:44.177247 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.33s 2026-01-07 00:54:44.177253 | orchestrator | ovn-db : Configure OVN SB connection settings --------------------------- 1.03s 2026-01-07 00:54:44.177260 | orchestrator | 2026-01-07 00:54:44 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:54:44.177266 | orchestrator | 2026-01-07 00:54:44 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:54:44.177273 | orchestrator | 2026-01-07 00:54:44 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:56:12.608011 | orchestrator | 2026-01-07 00:56:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:15.650764 | orchestrator | 2026-01-07 00:56:15 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:15.654210 | orchestrator | 2026-01-07 00:56:15 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:15.654379 | orchestrator | 2026-01-07 00:56:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:18.711410 | orchestrator | 2026-01-07 00:56:18 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:18.714782 | orchestrator | 2026-01-07 00:56:18 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:18.714985 | orchestrator | 2026-01-07 00:56:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:21.765038 | orchestrator | 2026-01-07 00:56:21 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:21.766544 | orchestrator | 2026-01-07 00:56:21 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:21.766604 | orchestrator | 2026-01-07 00:56:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:24.822945 | orchestrator | 2026-01-07 00:56:24 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:24.825502 | orchestrator | 2026-01-07 00:56:24 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:24.825606 | orchestrator | 2026-01-07 00:56:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:27.863767 | orchestrator | 2026-01-07 00:56:27 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:27.865192 | orchestrator | 2026-01-07 00:56:27 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:27.865241 | orchestrator | 2026-01-07 00:56:27 | INFO  | Wait 
1 second(s) until the next check 2026-01-07 00:56:30.910484 | orchestrator | 2026-01-07 00:56:30 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:30.910594 | orchestrator | 2026-01-07 00:56:30 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:30.910614 | orchestrator | 2026-01-07 00:56:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:33.967392 | orchestrator | 2026-01-07 00:56:33 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:33.968669 | orchestrator | 2026-01-07 00:56:33 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:33.968708 | orchestrator | 2026-01-07 00:56:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:37.008581 | orchestrator | 2026-01-07 00:56:37 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:37.011276 | orchestrator | 2026-01-07 00:56:37 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:37.011356 | orchestrator | 2026-01-07 00:56:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:40.053937 | orchestrator | 2026-01-07 00:56:40 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:40.062925 | orchestrator | 2026-01-07 00:56:40 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:40.063011 | orchestrator | 2026-01-07 00:56:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:43.110700 | orchestrator | 2026-01-07 00:56:43 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:43.112619 | orchestrator | 2026-01-07 00:56:43 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:43.112675 | orchestrator | 2026-01-07 00:56:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:46.159438 | orchestrator | 
2026-01-07 00:56:46 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:46.162181 | orchestrator | 2026-01-07 00:56:46 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:46.162265 | orchestrator | 2026-01-07 00:56:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:49.212259 | orchestrator | 2026-01-07 00:56:49 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:49.214840 | orchestrator | 2026-01-07 00:56:49 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:49.214904 | orchestrator | 2026-01-07 00:56:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:52.262603 | orchestrator | 2026-01-07 00:56:52 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:52.263635 | orchestrator | 2026-01-07 00:56:52 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:52.263780 | orchestrator | 2026-01-07 00:56:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:55.302879 | orchestrator | 2026-01-07 00:56:55 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:55.305784 | orchestrator | 2026-01-07 00:56:55 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:55.305867 | orchestrator | 2026-01-07 00:56:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:58.359897 | orchestrator | 2026-01-07 00:56:58 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:56:58.360724 | orchestrator | 2026-01-07 00:56:58 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:56:58.363082 | orchestrator | 2026-01-07 00:56:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:01.412506 | orchestrator | 2026-01-07 00:57:01 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in 
state STARTED 2026-01-07 00:57:01.414448 | orchestrator | 2026-01-07 00:57:01 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:01.414502 | orchestrator | 2026-01-07 00:57:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:04.452550 | orchestrator | 2026-01-07 00:57:04 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:04.454183 | orchestrator | 2026-01-07 00:57:04 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:04.454226 | orchestrator | 2026-01-07 00:57:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:07.519949 | orchestrator | 2026-01-07 00:57:07 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:07.520777 | orchestrator | 2026-01-07 00:57:07 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:07.520813 | orchestrator | 2026-01-07 00:57:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:10.556778 | orchestrator | 2026-01-07 00:57:10 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:10.558054 | orchestrator | 2026-01-07 00:57:10 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:10.558153 | orchestrator | 2026-01-07 00:57:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:13.593785 | orchestrator | 2026-01-07 00:57:13 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:13.596007 | orchestrator | 2026-01-07 00:57:13 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:13.596114 | orchestrator | 2026-01-07 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:16.648528 | orchestrator | 2026-01-07 00:57:16 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:16.651525 | orchestrator | 2026-01-07 00:57:16 | 
INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:16.651580 | orchestrator | 2026-01-07 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:19.705406 | orchestrator | 2026-01-07 00:57:19 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:19.705470 | orchestrator | 2026-01-07 00:57:19 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:19.705476 | orchestrator | 2026-01-07 00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:22.744731 | orchestrator | 2026-01-07 00:57:22 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:22.744829 | orchestrator | 2026-01-07 00:57:22 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:22.744844 | orchestrator | 2026-01-07 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:25.799469 | orchestrator | 2026-01-07 00:57:25 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:25.800594 | orchestrator | 2026-01-07 00:57:25 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:25.800630 | orchestrator | 2026-01-07 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:28.848564 | orchestrator | 2026-01-07 00:57:28 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:28.850567 | orchestrator | 2026-01-07 00:57:28 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:28.850739 | orchestrator | 2026-01-07 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:31.897265 | orchestrator | 2026-01-07 00:57:31 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:31.897930 | orchestrator | 2026-01-07 00:57:31 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 
2026-01-07 00:57:31.897962 | orchestrator | 2026-01-07 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:34.942517 | orchestrator | 2026-01-07 00:57:34 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:34.943034 | orchestrator | 2026-01-07 00:57:34 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:34.943069 | orchestrator | 2026-01-07 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:37.993809 | orchestrator | 2026-01-07 00:57:37 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:37.997123 | orchestrator | 2026-01-07 00:57:37 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:37.997955 | orchestrator | 2026-01-07 00:57:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:41.046262 | orchestrator | 2026-01-07 00:57:41 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:41.047595 | orchestrator | 2026-01-07 00:57:41 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:41.047682 | orchestrator | 2026-01-07 00:57:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:44.097228 | orchestrator | 2026-01-07 00:57:44 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:44.098387 | orchestrator | 2026-01-07 00:57:44 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:44.099790 | orchestrator | 2026-01-07 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:47.144932 | orchestrator | 2026-01-07 00:57:47 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state STARTED 2026-01-07 00:57:47.147688 | orchestrator | 2026-01-07 00:57:47 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:47.148299 | orchestrator | 2026-01-07 00:57:47 | INFO  | Wait 
1 second(s) until the next check 2026-01-07 00:57:50.191895 | orchestrator | 2026-01-07 00:57:50 | INFO  | Task ecfe1e61-0830-4948-bc77-2e4bc602bc7a is in state SUCCESS 2026-01-07 00:57:50.193755 | orchestrator | 2026-01-07 00:57:50.193825 | orchestrator | 2026-01-07 00:57:50.193836 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:57:50.193844 | orchestrator | 2026-01-07 00:57:50.193851 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:57:50.193859 | orchestrator | Wednesday 07 January 2026 00:51:06 +0000 (0:00:00.391) 0:00:00.391 ***** 2026-01-07 00:57:50.193865 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.193872 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.193878 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.193884 | orchestrator | 2026-01-07 00:57:50.193905 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:57:50.193912 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:00.410) 0:00:00.801 ***** 2026-01-07 00:57:50.193920 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-07 00:57:50.193927 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-07 00:57:50.193934 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-07 00:57:50.193941 | orchestrator | 2026-01-07 00:57:50.193947 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-01-07 00:57:50.193953 | orchestrator | 2026-01-07 00:57:50.193960 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-07 00:57:50.193966 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:00.615) 0:00:01.417 ***** 2026-01-07 00:57:50.193973 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.194321 | orchestrator | 2026-01-07 00:57:50.194344 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-01-07 00:57:50.194351 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:00.811) 0:00:02.228 ***** 2026-01-07 00:57:50.194358 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.194365 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.194372 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.194379 | orchestrator | 2026-01-07 00:57:50.194385 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-07 00:57:50.194391 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:00.594) 0:00:02.822 ***** 2026-01-07 00:57:50.194398 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.194403 | orchestrator | 2026-01-07 00:57:50.194445 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-01-07 00:57:50.194453 | orchestrator | Wednesday 07 January 2026 00:51:10 +0000 (0:00:00.861) 0:00:03.684 ***** 2026-01-07 00:57:50.194488 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.194496 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.194503 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.194509 | orchestrator | 2026-01-07 00:57:50.194583 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-01-07 00:57:50.194593 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:00.831) 0:00:04.515 ***** 2026-01-07 00:57:50.194616 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-07 00:57:50.194624 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 
1}) 2026-01-07 00:57:50.194631 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-07 00:57:50.194638 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-07 00:57:50.194646 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-07 00:57:50.195519 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-07 00:57:50.195543 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-07 00:57:50.195550 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-07 00:57:50.195557 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-07 00:57:50.195565 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-07 00:57:50.195571 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-07 00:57:50.195578 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-07 00:57:50.195584 | orchestrator | 2026-01-07 00:57:50.195591 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-07 00:57:50.195597 | orchestrator | Wednesday 07 January 2026 00:51:15 +0000 (0:00:04.572) 0:00:09.087 ***** 2026-01-07 00:57:50.195652 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-07 00:57:50.195659 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-07 00:57:50.195665 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-07 00:57:50.195671 | orchestrator | 2026-01-07 00:57:50.195677 | orchestrator | TASK [module-load : Persist modules via 
modules-load.d] ************************ 2026-01-07 00:57:50.195684 | orchestrator | Wednesday 07 January 2026 00:51:16 +0000 (0:00:00.984) 0:00:10.072 ***** 2026-01-07 00:57:50.195690 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-07 00:57:50.195697 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-07 00:57:50.195703 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-07 00:57:50.195709 | orchestrator | 2026-01-07 00:57:50.195715 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-07 00:57:50.195721 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:01.588) 0:00:11.660 ***** 2026-01-07 00:57:50.195728 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-07 00:57:50.195734 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.195778 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-07 00:57:50.195785 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.195792 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-07 00:57:50.195798 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.195804 | orchestrator | 2026-01-07 00:57:50.195810 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-07 00:57:50.195816 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 (0:00:01.236) 0:00:12.897 ***** 2026-01-07 00:57:50.195832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.195844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.195861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.195867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.195875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.195896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.195906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2026-01-07 00:57:50.195914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.195920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.195932 | orchestrator | 2026-01-07 00:57:50.195938 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-07 00:57:50.196313 | orchestrator | Wednesday 07 January 2026 00:51:22 +0000 (0:00:03.247) 0:00:16.145 ***** 2026-01-07 00:57:50.196324 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.196331 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.196336 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.196342 | orchestrator | 2026-01-07 00:57:50.196349 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-07 00:57:50.196355 | orchestrator | Wednesday 07 January 2026 00:51:24 +0000 (0:00:01.568) 0:00:17.713 ***** 2026-01-07 00:57:50.196362 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-07 00:57:50.196369 | 
orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-07 00:57:50.196376 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-07 00:57:50.196383 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-07 00:57:50.196389 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-07 00:57:50.196396 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-07 00:57:50.196403 | orchestrator | 2026-01-07 00:57:50.196409 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-07 00:57:50.196415 | orchestrator | Wednesday 07 January 2026 00:51:27 +0000 (0:00:03.249) 0:00:20.963 ***** 2026-01-07 00:57:50.196419 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.196422 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.196426 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.196430 | orchestrator | 2026-01-07 00:57:50.196434 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-07 00:57:50.196437 | orchestrator | Wednesday 07 January 2026 00:51:29 +0000 (0:00:02.310) 0:00:23.274 ***** 2026-01-07 00:57:50.196441 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.196445 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.196449 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.196453 | orchestrator | 2026-01-07 00:57:50.196456 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-07 00:57:50.196460 | orchestrator | Wednesday 07 January 2026 00:51:33 +0000 (0:00:03.717) 0:00:26.992 ***** 2026-01-07 00:57:50.196464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:57:50.196492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:57:50.196501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.196519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:57:50.196527 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.196535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:57:50.196542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:57:50.196549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.196629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:57:50.196972 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.197001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:57:50.197016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:57:50.197021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.197025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:57:50.197029 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.197033 | orchestrator | 2026-01-07 00:57:50.197037 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-07 00:57:50.197041 | orchestrator | Wednesday 07 January 2026 00:51:34 +0000 
(0:00:01.136) 0:00:28.128 ***** 2026-01-07 00:57:50.197045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.197083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 
00:57:50.197087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.197095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:57:50.197117 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.197126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14', '__omit_place_holder__290691ea725ae9dfc66f2a79afaf0d782cb2ae14'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:57:50.197130 | orchestrator | 2026-01-07 00:57:50.197134 | orchestrator | TASK [loadbalancer : Copying over config.json 
files for services] ************** 2026-01-07 00:57:50.197138 | orchestrator | Wednesday 07 January 2026 00:51:37 +0000 (0:00:03.177) 0:00:31.306 ***** 2026-01-07 00:57:50.197142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.197203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.197209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.197215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.197221 | orchestrator | 2026-01-07 00:57:50.197227 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-07 00:57:50.197648 | orchestrator 
| Wednesday 07 January 2026 00:51:41 +0000 (0:00:03.716) 0:00:35.023 ***** 2026-01-07 00:57:50.197672 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-07 00:57:50.197679 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-07 00:57:50.197684 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-07 00:57:50.197687 | orchestrator | 2026-01-07 00:57:50.197691 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-07 00:57:50.197695 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:04.099) 0:00:39.122 ***** 2026-01-07 00:57:50.197699 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-07 00:57:50.197703 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-07 00:57:50.197707 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-07 00:57:50.197711 | orchestrator | 2026-01-07 00:57:50.197736 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-07 00:57:50.197741 | orchestrator | Wednesday 07 January 2026 00:51:51 +0000 (0:00:05.626) 0:00:44.748 ***** 2026-01-07 00:57:50.197745 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.197749 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.197752 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.197756 | orchestrator | 2026-01-07 00:57:50.197760 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-07 00:57:50.197767 | orchestrator | Wednesday 07 January 2026 00:51:51 +0000 (0:00:00.504) 
0:00:45.252 ***** 2026-01-07 00:57:50.197771 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-07 00:57:50.197776 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-07 00:57:50.197780 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-07 00:57:50.197784 | orchestrator | 2026-01-07 00:57:50.197788 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-07 00:57:50.197792 | orchestrator | Wednesday 07 January 2026 00:51:54 +0000 (0:00:02.923) 0:00:48.176 ***** 2026-01-07 00:57:50.197795 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-07 00:57:50.197799 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-07 00:57:50.197805 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-07 00:57:50.197811 | orchestrator | 2026-01-07 00:57:50.197818 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-07 00:57:50.197827 | orchestrator | Wednesday 07 January 2026 00:51:58 +0000 (0:00:04.111) 0:00:52.288 ***** 2026-01-07 00:57:50.197834 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-07 00:57:50.197840 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-07 00:57:50.197847 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-07 00:57:50.197853 | orchestrator | 2026-01-07 00:57:50.197860 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-07 00:57:50.197865 | 
orchestrator | Wednesday 07 January 2026 00:52:01 +0000 (0:00:02.527) 0:00:54.815 ***** 2026-01-07 00:57:50.197871 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-07 00:57:50.197877 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-07 00:57:50.197884 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-07 00:57:50.197898 | orchestrator | 2026-01-07 00:57:50.198144 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-07 00:57:50.198152 | orchestrator | Wednesday 07 January 2026 00:52:03 +0000 (0:00:01.747) 0:00:56.562 ***** 2026-01-07 00:57:50.198158 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.198164 | orchestrator | 2026-01-07 00:57:50.198170 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-07 00:57:50.198176 | orchestrator | Wednesday 07 January 2026 00:52:03 +0000 (0:00:00.843) 0:00:57.406 ***** 2026-01-07 00:57:50.198184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.198192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.198354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.198370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.198375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.198379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.198393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.198399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.198409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.198417 | orchestrator | 2026-01-07 00:57:50.198423 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-07 00:57:50.198428 | orchestrator | Wednesday 07 January 2026 00:52:07 +0000 (0:00:03.339) 0:01:00.746 ***** 2026-01-07 00:57:50.198677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:57:50.198701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
[Repeated loop-item payloads elided below: every skipped task loops over the same three loadbalancer service definitions (haproxy, proxysql, keepalived) shown in full above; the only per-node difference is the haproxy healthcheck URL, which targets the node's own address (http://192.168.16.10:61313 on testbed-node-0, .11 on testbed-node-1, .12 on testbed-node-2).]
2026-01-07 00:57:50.198708 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2026-01-07 00:57:50.198722 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.198729 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2026-01-07 00:57:50.198736 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2026-01-07 00:57:50.198743 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2026-01-07 00:57:50.198869 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2026-01-07 00:57:50.198878 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.198968 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2026-01-07 00:57:50.199333 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2026-01-07 00:57:50.199339 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.199352 | orchestrator |
2026-01-07 00:57:50.199356 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-01-07 00:57:50.199360 | orchestrator | Wednesday 07 January 2026 00:52:08 +0000 (0:00:01.403) 0:01:02.150 *****
2026-01-07 00:57:50.199364 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2026-01-07 00:57:50.199368 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2026-01-07 00:57:50.199372 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2026-01-07 00:57:50.199376 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.199380 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2026-01-07 00:57:50.199707 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2026-01-07 00:57:50.199737 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2026-01-07 00:57:50.199744 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.199760 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2026-01-07 00:57:50.199767 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2026-01-07 00:57:50.199773 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2026-01-07 00:57:50.199779 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.199785 | orchestrator |
2026-01-07 00:57:50.199791 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-01-07 00:57:50.199798 | orchestrator | Wednesday 07 January 2026 00:52:09 +0000 (0:00:00.842) 0:01:02.993 *****
2026-01-07 00:57:50.199805 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2026-01-07 00:57:50.199873 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2026-01-07 00:57:50.199887 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2026-01-07 00:57:50.199951 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.199963 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2026-01-07 00:57:50.199969 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2026-01-07 00:57:50.199975 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2026-01-07 00:57:50.199981 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.199987 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2026-01-07 00:57:50.200021 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2026-01-07 00:57:50.200083 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2026-01-07 00:57:50.200092 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.200098 | orchestrator |
2026-01-07 00:57:50.200103 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-01-07 00:57:50.200107 | orchestrator | Wednesday 07 January 2026 00:52:10 +0000 (0:00:01.139) 0:01:04.132 *****
2026-01-07 00:57:50.200123 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2026-01-07 00:57:50.200127 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2026-01-07 00:57:50.200131 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2026-01-07 00:57:50.200135 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.200139 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2026-01-07 00:57:50.200144 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2026-01-07 00:57:50.200150 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2026-01-07 00:57:50.200157 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.200413 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2026-01-07 00:57:50.200490 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2026-01-07 00:57:50.200500 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2026-01-07 00:57:50.200507 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.200514 | orchestrator |
2026-01-07 00:57:50.200520 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-01-07 00:57:50.200526 | orchestrator | Wednesday 07 January 2026 00:52:11 +0000 (0:00:00.722) 0:01:04.855 *****
2026-01-07 00:57:50.200533 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2026-01-07 00:57:50.200540 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2026-01-07 00:57:50.200547 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2026-01-07 00:57:50.200553 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.200649 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2026-01-07 00:57:50.200668 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2026-01-07 00:57:50.200673 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2026-01-07 00:57:50.200677 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.200681 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2026-01-07 00:57:50.200685 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2026-01-07 00:57:50.200689 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2026-01-07 00:57:50.200693 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.200696 | orchestrator |
2026-01-07 00:57:50.200700 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-01-07 00:57:50.200704 | orchestrator | Wednesday 07 January 2026 00:52:12 +0000 (0:00:01.036) 0:01:05.892 *****
2026-01-07 00:57:50.200708 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2026-01-07 00:57:50.200838 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2026-01-07 00:57:50.200845 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2026-01-07 00:57:50.200849 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.200853 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2026-01-07 00:57:50.200858 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2026-01-07 00:57:50.200862 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2026-01-07 00:57:50.200866 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.200870 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2026-01-07 00:57:50.200941 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2026-01-07 00:57:50.200951 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2026-01-07 00:57:50.200955 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.200959 | orchestrator |
2026-01-07 00:57:50.200963 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-01-07 00:57:50.200967 | orchestrator | Wednesday 07 January 2026 00:52:13 +0000 (0:00:01.024) 0:01:06.916 *****
2026-01-07 00:57:50.200971 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2026-01-07 00:57:50.201474 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2026-01-07 00:57:50.201495 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2026-01-07 00:57:50.201500 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.201504 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2026-01-07 00:57:50.201514 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2026-01-07 00:57:50.202254 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2026-01-07 00:57:50.202313 |
orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.202351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:57:50.202359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:57:50.202364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.202369 | orchestrator | skipping: [testbed-node-2] 
2026-01-07 00:57:50.202373 | orchestrator | 2026-01-07 00:57:50.202378 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-07 00:57:50.202384 | orchestrator | Wednesday 07 January 2026 00:52:14 +0000 (0:00:00.608) 0:01:07.525 ***** 2026-01-07 00:57:50.202390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:57:50.202414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:57:50.202423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.202430 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.202452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:57:50.202459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:57:50.202465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.202471 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.202477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:57:50.202489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:57:50.202496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:57:50.202502 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.202508 | orchestrator | 2026-01-07 00:57:50.202514 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-07 00:57:50.202521 | orchestrator | Wednesday 07 January 2026 00:52:14 +0000 (0:00:00.790) 0:01:08.316 ***** 2026-01-07 00:57:50.202526 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-07 00:57:50.202531 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-07 00:57:50.202541 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-07 00:57:50.202545 | orchestrator | 2026-01-07 00:57:50.202549 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-07 00:57:50.202553 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:01.947) 0:01:10.263 ***** 2026-01-07 00:57:50.202557 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-07 00:57:50.202561 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-07 00:57:50.202566 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-07 00:57:50.202569 | orchestrator | 2026-01-07 00:57:50.202573 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-07 00:57:50.202578 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:01.385) 0:01:11.649 ***** 2026-01-07 00:57:50.202582 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 00:57:50.202586 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 00:57:50.202590 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 00:57:50.202594 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 00:57:50.202633 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.202638 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 00:57:50.202642 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.202646 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 00:57:50.202650 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.202654 | orchestrator | 2026-01-07 00:57:50.202663 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-07 00:57:50.202667 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:01.191) 0:01:12.840 ***** 2026-01-07 00:57:50.202672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.202676 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.202681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:57:50.202690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.202743 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.202753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:57:50.202761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.202765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.202769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:57:50.202773 | orchestrator | 2026-01-07 00:57:50.202777 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-07 00:57:50.202781 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:02.566) 0:01:15.407 ***** 2026-01-07 00:57:50.202785 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.202789 | orchestrator | 2026-01-07 00:57:50.202793 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-07 00:57:50.202796 | orchestrator | Wednesday 07 January 2026 00:52:22 +0000 (0:00:00.737) 0:01:16.145 ***** 2026-01-07 00:57:50.202802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-07 00:57:50.202824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.202829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.202840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.202847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-07 00:57:50.202854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-07 00:57:50.202860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.202884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.202891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.202902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.202909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.202916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.202922 | orchestrator | 2026-01-07 00:57:50.202928 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-07 00:57:50.202934 | orchestrator | Wednesday 07 January 2026 00:52:27 +0000 
(0:00:04.401) 0:01:20.547 ***** 2026-01-07 00:57:50.202941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-07 00:57:50.202953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.202965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.202977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.202983 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.202990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-07 00:57:50.202997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.203004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203016 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.203032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-07 00:57:50.203046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.203052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203067 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.203074 | orchestrator | 2026-01-07 00:57:50.203079 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-07 00:57:50.203083 | orchestrator | Wednesday 07 January 2026 00:52:28 +0000 (0:00:01.454) 0:01:22.001 ***** 2026-01-07 00:57:50.203088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-07 00:57:50.203094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-07 00:57:50.203099 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.203103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-07 00:57:50.203107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-07 00:57:50.203111 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.203115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-07 00:57:50.203119 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-07 00:57:50.203128 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.203132 | orchestrator | 2026-01-07 00:57:50.203140 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-07 00:57:50.203144 | orchestrator | Wednesday 07 January 2026 00:52:29 +0000 (0:00:01.073) 0:01:23.074 ***** 2026-01-07 00:57:50.203148 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.203152 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.203156 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.203160 | orchestrator | 2026-01-07 00:57:50.203164 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-07 00:57:50.203168 | orchestrator | Wednesday 07 January 2026 00:52:31 +0000 (0:00:01.648) 0:01:24.722 ***** 2026-01-07 00:57:50.203175 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.203179 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.203183 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.203187 | orchestrator | 2026-01-07 00:57:50.203191 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-07 00:57:50.203195 | orchestrator | Wednesday 07 January 2026 00:52:33 +0000 (0:00:02.248) 0:01:26.970 ***** 2026-01-07 00:57:50.203199 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.203203 | orchestrator | 2026-01-07 00:57:50.203206 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-07 00:57:50.203210 | orchestrator | Wednesday 07 January 2026 00:52:34 +0000 (0:00:00.826) 0:01:27.796 ***** 2026-01-07 
00:57:50.203215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.203220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.203247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203251 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.203262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203278 | orchestrator | 2026-01-07 00:57:50.203284 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-07 00:57:50.203290 | orchestrator | Wednesday 07 January 2026 00:52:40 +0000 (0:00:06.073) 0:01:33.869 ***** 2026-01-07 00:57:50.203301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.203311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203324 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.203330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.203336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203354 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.203369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.203376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203389 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.203395 | orchestrator | 2026-01-07 00:57:50.203401 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-07 00:57:50.203407 | orchestrator | Wednesday 07 January 2026 00:52:41 +0000 (0:00:00.612) 0:01:34.482 ***** 2026-01-07 00:57:50.203413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-07 00:57:50.203420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-07 00:57:50.203433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-07 00:57:50.203440 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.203446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-07 00:57:50.203453 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.203459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-07 00:57:50.203465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-07 00:57:50.203471 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.203477 | orchestrator | 2026-01-07 00:57:50.203483 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-07 00:57:50.203488 | orchestrator | Wednesday 07 January 2026 00:52:42 +0000 (0:00:01.275) 0:01:35.758 ***** 2026-01-07 00:57:50.203495 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.203501 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.203507 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.203513 | orchestrator | 2026-01-07 00:57:50.203519 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-07 00:57:50.203587 | orchestrator | Wednesday 07 January 2026 00:52:43 +0000 (0:00:01.396) 0:01:37.154 ***** 2026-01-07 00:57:50.203592 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.203596 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.203617 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.203623 | orchestrator | 2026-01-07 00:57:50.203633 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-07 00:57:50.203638 | orchestrator | Wednesday 07 January 2026 00:52:46 +0000 (0:00:02.298) 0:01:39.453 ***** 2026-01-07 00:57:50.203641 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.203645 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.203649 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.203653 | orchestrator | 2026-01-07 00:57:50.203657 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-07 00:57:50.203667 | orchestrator | Wednesday 07 January 2026 00:52:46 +0000 (0:00:00.302) 0:01:39.755 ***** 2026-01-07 00:57:50.203672 | 
orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.203676 | orchestrator | 2026-01-07 00:57:50.203680 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-07 00:57:50.203683 | orchestrator | Wednesday 07 January 2026 00:52:47 +0000 (0:00:00.858) 0:01:40.613 ***** 2026-01-07 00:57:50.203688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-07 00:57:50.203699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-07 00:57:50.203703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-07 00:57:50.203707 | orchestrator | 2026-01-07 00:57:50.203712 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-07 00:57:50.203716 | orchestrator | Wednesday 07 January 2026 00:52:50 +0000 (0:00:03.348) 0:01:43.962 ***** 2026-01-07 00:57:50.203723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-07 00:57:50.203727 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.203734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-07 00:57:50.203739 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.203743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check 
inter 2000 rise 2 fall 5']}}}})  2026-01-07 00:57:50.203751 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.203755 | orchestrator | 2026-01-07 00:57:50.203759 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-07 00:57:50.203763 | orchestrator | Wednesday 07 January 2026 00:52:52 +0000 (0:00:01.831) 0:01:45.794 ***** 2026-01-07 00:57:50.203769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:57:50.203777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:57:50.203784 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.203790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:57:50.203797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:57:50.203803 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.203815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:57:50.203825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:57:50.203830 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.203836 | orchestrator | 2026-01-07 00:57:50.203842 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-07 00:57:50.203848 | orchestrator | Wednesday 07 January 2026 00:52:54 +0000 (0:00:02.034) 0:01:47.829 ***** 2026-01-07 00:57:50.203860 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.203866 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.203872 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.203878 | orchestrator | 2026-01-07 00:57:50.203884 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] 
*********** 2026-01-07 00:57:50.203890 | orchestrator | Wednesday 07 January 2026 00:52:55 +0000 (0:00:00.779) 0:01:48.609 ***** 2026-01-07 00:57:50.203896 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.203902 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.203908 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.203915 | orchestrator | 2026-01-07 00:57:50.203921 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-07 00:57:50.203927 | orchestrator | Wednesday 07 January 2026 00:52:56 +0000 (0:00:01.494) 0:01:50.103 ***** 2026-01-07 00:57:50.203934 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.203940 | orchestrator | 2026-01-07 00:57:50.203947 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-07 00:57:50.203954 | orchestrator | Wednesday 07 January 2026 00:52:57 +0000 (0:00:00.780) 0:01:50.883 ***** 2026-01-07 00:57:50.203960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.203967 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.203987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.204015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.204056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204076 | orchestrator | 2026-01-07 00:57:50.204083 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-07 00:57:50.204089 | orchestrator | Wednesday 07 January 2026 00:53:05 +0000 (0:00:07.647) 0:01:58.531 ***** 2026-01-07 00:57:50.204093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.204097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204106 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204124 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.204128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.204132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204153 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.204167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.204173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.204192 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.204198 | orchestrator | 2026-01-07 00:57:50.204203 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-07 00:57:50.204209 | orchestrator | Wednesday 07 January 2026 00:53:06 +0000 (0:00:01.768) 
0:02:00.299 ***** 2026-01-07 00:57:50.204216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:57:50.204223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:57:50.204238 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.204244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:57:50.204250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:57:50.204256 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.204262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:57:50.204273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:57:50.204280 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.204286 | orchestrator | 2026-01-07 00:57:50.204293 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-07 00:57:50.204303 | orchestrator | Wednesday 07 January 2026 00:53:08 
+0000 (0:00:01.896) 0:02:02.195 ***** 2026-01-07 00:57:50.204309 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.204315 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.204322 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.204329 | orchestrator | 2026-01-07 00:57:50.204335 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-07 00:57:50.204341 | orchestrator | Wednesday 07 January 2026 00:53:10 +0000 (0:00:01.533) 0:02:03.729 ***** 2026-01-07 00:57:50.204347 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.204354 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.204359 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.204365 | orchestrator | 2026-01-07 00:57:50.204371 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-07 00:57:50.204375 | orchestrator | Wednesday 07 January 2026 00:53:12 +0000 (0:00:01.862) 0:02:05.591 ***** 2026-01-07 00:57:50.204378 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.204382 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.204386 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.204390 | orchestrator | 2026-01-07 00:57:50.204394 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-07 00:57:50.204399 | orchestrator | Wednesday 07 January 2026 00:53:12 +0000 (0:00:00.404) 0:02:05.995 ***** 2026-01-07 00:57:50.204405 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.204411 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.204417 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.204422 | orchestrator | 2026-01-07 00:57:50.204429 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-07 00:57:50.204436 | orchestrator | Wednesday 07 January 2026 00:53:12 +0000 
(0:00:00.248) 0:02:06.244 ***** 2026-01-07 00:57:50.204442 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.204448 | orchestrator | 2026-01-07 00:57:50.204454 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-07 00:57:50.204461 | orchestrator | Wednesday 07 January 2026 00:53:13 +0000 (0:00:00.721) 0:02:06.966 ***** 2026-01-07 00:57:50.204468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 00:57:50.204481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 00:57:50.204493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:57:50.204505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:57:50.205358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
 2026-01-07 00:57:50.205437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 00:57:50.205457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:57:50.205463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205486 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205490 | orchestrator | 2026-01-07 00:57:50.205494 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-07 00:57:50.205498 | orchestrator | Wednesday 07 January 2026 00:53:17 +0000 (0:00:04.196) 0:02:11.162 ***** 2026-01-07 00:57:50.205502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 00:57:50.205506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:57:50.205514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 00:57:50.205570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:57:50.205574 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205650 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.205656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205668 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.205674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 00:57:50.205682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:57:50.205690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205702 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.205709 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.205713 | orchestrator | 2026-01-07 00:57:50.205717 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-07 00:57:50.205721 | orchestrator | Wednesday 07 January 2026 00:53:19 +0000 (0:00:01.383) 0:02:12.546 ***** 2026-01-07 00:57:50.205728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-07 00:57:50.205733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}})
2026-01-07 00:57:50.205740 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.205746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:57:50.205750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:57:50.205754 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.205758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:57:50.205762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:57:50.205766 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.205769 | orchestrator |
2026-01-07 00:57:50.205773 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-07 00:57:50.205777 | orchestrator | Wednesday 07 January 2026 00:53:20 +0000 (0:00:00.955) 0:02:13.501 *****
2026-01-07 00:57:50.205781 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.205785 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.205789 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.205792 | orchestrator |
2026-01-07 00:57:50.205796 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-07 00:57:50.205800 | orchestrator | Wednesday 07 January 2026 00:53:21 +0000 (0:00:01.838) 0:02:15.340 *****
2026-01-07 00:57:50.205804 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.205810 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.205816 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.205822 | orchestrator |
2026-01-07 00:57:50.205827 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-01-07 00:57:50.205832 | orchestrator | Wednesday 07 January 2026 00:53:23 +0000 (0:00:01.986) 0:02:17.327 *****
2026-01-07 00:57:50.205838 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.205847 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.205854 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.205860 | orchestrator |
2026-01-07 00:57:50.205869 | orchestrator | TASK [include_role : glance] ***************************************************
2026-01-07 00:57:50.205874 | orchestrator | Wednesday 07 January 2026 00:53:24 +0000 (0:00:00.526) 0:02:17.854 *****
2026-01-07 00:57:50.205879 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:57:50.205886 | orchestrator |
2026-01-07 00:57:50.205892 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-01-07 00:57:50.205897 | orchestrator | Wednesday 07 January 2026 00:53:25 +0000 (0:00:00.805) 0:02:18.660 *****
2026-01-07 00:57:50.205909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 00:57:50.205927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:57:50.205934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 00:57:50.205954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:57:50.205961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 00:57:50.205974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-07 00:57:50.205984 | orchestrator |
2026-01-07 00:57:50.205990 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-01-07 00:57:50.205996 | orchestrator | Wednesday 07 January 2026 00:53:29 +0000 (0:00:04.419) 0:02:23.079 *****
2026-01-07 00:57:50.206002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 00:57:50.206052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-07 00:57:50.206070 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.206077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 00:57:50.206087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-07 00:57:50.206100 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.206113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 00:57:50.206121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-07 00:57:50.206131 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.206135 | orchestrator |
2026-01-07 00:57:50.206140 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-01-07 00:57:50.206145 | orchestrator | Wednesday 07 January 2026 00:53:33 +0000 (0:00:03.756) 0:02:26.836 *****
2026-01-07 00:57:50.206154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-07 00:57:50.206169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-07 00:57:50.206178 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.206184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-07 00:57:50.206191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-07 00:57:50.206198 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.206204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-07 00:57:50.206210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-07 00:57:50.206222 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.206228 | orchestrator |
2026-01-07 00:57:50.206234 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-01-07 00:57:50.206241 | orchestrator | Wednesday 07 January 2026 00:53:36 +0000 (0:00:03.388) 0:02:30.224 *****
2026-01-07 00:57:50.206247 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.206252 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.206258 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.206265 | orchestrator |
2026-01-07 00:57:50.206271 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-01-07 00:57:50.206277 | orchestrator | Wednesday 07 January 2026 00:53:38 +0000 (0:00:01.238) 0:02:31.462 *****
2026-01-07 00:57:50.206283 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.206289 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.206296 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.206302 | orchestrator |
2026-01-07 00:57:50.206308 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-01-07 00:57:50.206314 | orchestrator | Wednesday 07 January 2026 00:53:40 +0000 (0:00:02.018) 0:02:33.481 *****
2026-01-07 00:57:50.206320 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.206326 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.206332 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.206338 | orchestrator |
2026-01-07 00:57:50.206345 | orchestrator | TASK [include_role : grafana] **************************************************
2026-01-07 00:57:50.206349 | orchestrator | Wednesday 07 January 2026 00:53:40 +0000 (0:00:00.526) 0:02:34.007 *****
2026-01-07 00:57:50.206356 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:57:50.206359 | orchestrator |
2026-01-07 00:57:50.206363 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-01-07 00:57:50.206367 | orchestrator | Wednesday 07 January 2026 00:53:41 +0000 (0:00:00.893) 0:02:34.901 *****
2026-01-07 00:57:50.206377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 00:57:50.206383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 00:57:50.206388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 00:57:50.206399 | orchestrator |
2026-01-07 00:57:50.206406 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-01-07 00:57:50.206412 | orchestrator | Wednesday 07 January 2026 00:53:46 +0000 (0:00:04.571) 0:02:39.472 *****
2026-01-07 00:57:50.206419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 00:57:50.206425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 00:57:50.206432 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.206438 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.206448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 00:57:50.206455 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.206461 | orchestrator |
2026-01-07 00:57:50.206479 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-01-07 00:57:50.206485 | orchestrator | Wednesday 07 January 2026 00:53:46 +0000 (0:00:00.698) 0:02:40.171 *****
2026-01-07 00:57:50.206492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:57:50.206499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:57:50.206517 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.206530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:57:50.206538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:57:50.206547 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.206551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:57:50.206555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:57:50.206559 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.206562 | orchestrator |
2026-01-07 00:57:50.206566 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-01-07 00:57:50.206570 | orchestrator | Wednesday 07 January 2026 00:53:47 +0000 (0:00:00.670) 0:02:40.841 *****
2026-01-07 00:57:50.206574 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.206578 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.206582 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.206585 | orchestrator |
2026-01-07 00:57:50.206589 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-01-07 00:57:50.206593 | orchestrator | Wednesday 07 January 2026 00:53:48 +0000 (0:00:01.417) 0:02:42.259 *****
2026-01-07 00:57:50.206597 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.206647 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.206651 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.206654 | orchestrator |
2026-01-07 00:57:50.206658 | orchestrator | TASK [include_role : heat] *****************************************************
2026-01-07 00:57:50.206662 | orchestrator | Wednesday 07 January 2026 00:53:50 +0000 (0:00:02.125) 0:02:44.385 *****
2026-01-07 00:57:50.206666 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.206669 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.206673 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.206677 | orchestrator |
2026-01-07 00:57:50.206681 | orchestrator | TASK [include_role : horizon] **************************************************
2026-01-07 00:57:50.206685 | orchestrator | Wednesday 07 January 2026 00:53:51 +0000 (0:00:00.531) 0:02:44.916 *****
2026-01-07 00:57:50.206688 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:57:50.206692 | orchestrator |
2026-01-07 00:57:50.206696 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-01-07 00:57:50.206700 | orchestrator | Wednesday 07 January 2026 00:53:52 +0000 (0:00:00.986) 0:02:45.903 *****
2026-01-07 00:57:50.206714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:57:50.206724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:57:50.206740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:57:50.206751 | orchestrator |
2026-01-07 00:57:50.206757 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-01-07 00:57:50.206763 | orchestrator | Wednesday 07 January 2026 00:53:56 +0000 (0:00:04.347) 0:02:50.250 *****
2026-01-07 00:57:50.206769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:57:50.206776 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.206790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True,
'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:57:50.206807 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.206813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:57:50.206820 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.206824 | orchestrator | 2026-01-07 00:57:50.206830 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-07 00:57:50.206838 | orchestrator | Wednesday 07 January 2026 00:53:58 +0000 (0:00:01.220) 0:02:51.471 ***** 2026-01-07 00:57:50.206846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-07 00:57:50.206852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-07 00:57:50.206857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-07 00:57:50.206862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-07 00:57:50.206867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-07 00:57:50.206872 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.206878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-07 00:57:50.206884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-07 00:57:50.206891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-07 00:57:50.206898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-07 00:57:50.206905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-07 00:57:50.206911 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.206917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-07 00:57:50.206924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-07 00:57:50.206939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-07 00:57:50.206948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-07 00:57:50.206952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-07 00:57:50.206956 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.206960 | orchestrator | 2026-01-07 00:57:50.206964 | orchestrator | TASK 
[proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-07 00:57:50.206968 | orchestrator | Wednesday 07 January 2026 00:53:59 +0000 (0:00:01.009) 0:02:52.480 ***** 2026-01-07 00:57:50.206973 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.206979 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.206985 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.206992 | orchestrator | 2026-01-07 00:57:50.206998 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-07 00:57:50.207005 | orchestrator | Wednesday 07 January 2026 00:54:00 +0000 (0:00:01.380) 0:02:53.861 ***** 2026-01-07 00:57:50.207012 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.207018 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.207024 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.207031 | orchestrator | 2026-01-07 00:57:50.207037 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-07 00:57:50.207043 | orchestrator | Wednesday 07 January 2026 00:54:02 +0000 (0:00:02.095) 0:02:55.957 ***** 2026-01-07 00:57:50.207049 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.207055 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.207059 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.207062 | orchestrator | 2026-01-07 00:57:50.207066 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-07 00:57:50.207071 | orchestrator | Wednesday 07 January 2026 00:54:02 +0000 (0:00:00.320) 0:02:56.278 ***** 2026-01-07 00:57:50.207077 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.207083 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.207092 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.207099 | orchestrator | 2026-01-07 00:57:50.207108 | orchestrator | TASK 
[include_role : keystone] ************************************************* 2026-01-07 00:57:50.207113 | orchestrator | Wednesday 07 January 2026 00:54:03 +0000 (0:00:00.558) 0:02:56.836 ***** 2026-01-07 00:57:50.207118 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.207124 | orchestrator | 2026-01-07 00:57:50.207129 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-07 00:57:50.207134 | orchestrator | Wednesday 07 January 2026 00:54:04 +0000 (0:00:00.961) 0:02:57.797 ***** 2026-01-07 00:57:50.207142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 00:57:50.207155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:57:50.207167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:57:50.207179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 
00:57:50.207185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:57:50.207207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:57:50.207219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 00:57:50.207228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:57:50.207239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:57:50.207245 | orchestrator | 2026-01-07 00:57:50.207252 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-07 00:57:50.207258 | orchestrator | Wednesday 07 January 2026 00:54:08 +0000 (0:00:03.992) 0:03:01.790 ***** 2026-01-07 00:57:50.207265 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 00:57:50.207271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:57:50.207283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:57:50.207290 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.207301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 00:57:50.207312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:57:50.207319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:57:50.207325 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.207331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 00:57:50.207343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:57:50.207349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:57:50.207355 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.207360 | orchestrator | 2026-01-07 00:57:50.207366 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-07 00:57:50.207373 | orchestrator | Wednesday 07 January 2026 00:54:09 +0000 (0:00:00.951) 0:03:02.742 ***** 2026-01-07 00:57:50.207379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:57:50.207390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:57:50.207397 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.207409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:57:50.207416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:57:50.207422 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.207429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:57:50.207435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:57:50.207442 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.207446 | orchestrator | 2026-01-07 00:57:50.207449 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-07 00:57:50.207453 | orchestrator | Wednesday 07 January 2026 00:54:10 +0000 (0:00:00.855) 0:03:03.597 ***** 2026-01-07 00:57:50.207464 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.207468 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.207472 | orchestrator | changed: [testbed-node-1] 2026-01-07 
00:57:50.207478 | orchestrator | 2026-01-07 00:57:50.207486 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-07 00:57:50.207495 | orchestrator | Wednesday 07 January 2026 00:54:11 +0000 (0:00:01.568) 0:03:05.166 ***** 2026-01-07 00:57:50.207502 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.207508 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.207514 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.207519 | orchestrator | 2026-01-07 00:57:50.207526 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-07 00:57:50.207531 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:02.254) 0:03:07.421 ***** 2026-01-07 00:57:50.207536 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.207542 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.207548 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.207554 | orchestrator | 2026-01-07 00:57:50.207560 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-07 00:57:50.207566 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:00.563) 0:03:07.984 ***** 2026-01-07 00:57:50.207573 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.207580 | orchestrator | 2026-01-07 00:57:50.207586 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-07 00:57:50.207592 | orchestrator | Wednesday 07 January 2026 00:54:15 +0000 (0:00:01.101) 0:03:09.085 ***** 2026-01-07 00:57:50.207654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 00:57:50.207671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.207686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 00:57:50.207699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.207704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2026-01-07 00:57:50.207708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.207712 | orchestrator | 2026-01-07 00:57:50.207716 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-07 00:57:50.207720 | orchestrator | Wednesday 07 January 2026 00:54:19 +0000 (0:00:04.216) 0:03:13.302 ***** 2026-01-07 00:57:50.207726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2026-01-07 00:57:50.207734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.207742 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.207747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 00:57:50.207754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.207760 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.207767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 00:57:50.207774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.207781 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.207789 | orchestrator | 2026-01-07 00:57:50.207796 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-07 00:57:50.207800 | orchestrator | Wednesday 07 January 2026 00:54:20 +0000 (0:00:00.904) 0:03:14.207 ***** 2026-01-07 00:57:50.207805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:57:50.207810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:57:50.207814 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.207818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:57:50.207823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:57:50.207827 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.207873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}})  2026-01-07 00:57:50.207888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:57:50.207895 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.207904 | orchestrator | 2026-01-07 00:57:50.207914 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-07 00:57:50.207920 | orchestrator | Wednesday 07 January 2026 00:54:21 +0000 (0:00:00.952) 0:03:15.159 ***** 2026-01-07 00:57:50.207926 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.207931 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.207937 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.207942 | orchestrator | 2026-01-07 00:57:50.207948 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-07 00:57:50.207954 | orchestrator | Wednesday 07 January 2026 00:54:23 +0000 (0:00:01.268) 0:03:16.427 ***** 2026-01-07 00:57:50.207960 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.207966 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.207972 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.207979 | orchestrator | 2026-01-07 00:57:50.207983 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-07 00:57:50.207987 | orchestrator | Wednesday 07 January 2026 00:54:25 +0000 (0:00:02.064) 0:03:18.492 ***** 2026-01-07 00:57:50.207991 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.207995 | orchestrator | 2026-01-07 00:57:50.207998 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-07 00:57:50.208002 | orchestrator | Wednesday 07 January 2026 00:54:26 +0000 (0:00:01.331) 
0:03:19.824 ***** 2026-01-07 00:57:50.208007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-07 00:57:50.208020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-07 00:57:50.208045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-07 00:57:50.208072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208084 | orchestrator | 2026-01-07 00:57:50.208088 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-07 00:57:50.208092 | orchestrator | Wednesday 07 January 2026 00:54:30 +0000 (0:00:03.604) 0:03:23.428 ***** 2026-01-07 00:57:50.208096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-07 00:57:50.208106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208121 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.208125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-07 00:57:50.208130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208152 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.208161 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.208348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-07 00:57:50.208369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:57:50.208376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-07 00:57:50.208383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-07 00:57:50.208398 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.208402 | orchestrator |
2026-01-07 00:57:50.208409 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-01-07 00:57:50.208418 | orchestrator | Wednesday 07 January 2026 00:54:30 +0000 (0:00:00.647) 0:03:24.076 *****
2026-01-07 00:57:50.208427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-07 00:57:50.208434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-07 00:57:50.208440 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.208446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-07 00:57:50.208452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-07 00:57:50.208458 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.208464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-07 00:57:50.208476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-07 00:57:50.208483 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.208489 | orchestrator |
2026-01-07 00:57:50.208496 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-01-07 00:57:50.208500 | orchestrator | Wednesday 07 January 2026 00:54:32 +0000 (0:00:01.401) 0:03:25.477 *****
2026-01-07 00:57:50.208550 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.208558 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.208562 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.208566 | orchestrator |
2026-01-07 00:57:50.208570 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-01-07 00:57:50.208576 | orchestrator | Wednesday 07 January 2026 00:54:33 +0000 (0:00:01.412) 0:03:26.890 *****
2026-01-07 00:57:50.208582 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.208589 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.208596 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.208620 | orchestrator |
2026-01-07 00:57:50.208626 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-01-07 00:57:50.208632 | orchestrator | Wednesday 07 January 2026 00:54:35 +0000 (0:00:02.221) 0:03:29.112 *****
2026-01-07 00:57:50.208638 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:57:50.208645 | orchestrator |
2026-01-07 00:57:50.208651 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-01-07 00:57:50.208658 | orchestrator | Wednesday 07 January 2026 00:54:37 +0000 (0:00:01.312) 0:03:30.424 *****
2026-01-07 00:57:50.208665 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:57:50.208672 | orchestrator |
2026-01-07 00:57:50.208676 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-01-07 00:57:50.208681 | orchestrator | Wednesday 07 January 2026 00:54:39 +0000 (0:00:02.882) 0:03:33.307 *****
2026-01-07 00:57:50.208689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:57:50.208710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:57:50.208718 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.208782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:57:50.208801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:57:50.208808 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.208816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:57:50.208863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:57:50.208869 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.208873 | orchestrator |
2026-01-07 00:57:50.208877 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-01-07 00:57:50.208882 | orchestrator | Wednesday 07 January 2026 00:54:42 +0000 (0:00:02.419) 0:03:35.727 *****
2026-01-07 00:57:50.208886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:57:50.208894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:57:50.208898 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.208937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:57:50.208945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:57:50.208953 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.208957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:57:50.208962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:57:50.208965 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.208969 | orchestrator |
2026-01-07 00:57:50.208976 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-01-07 00:57:50.208979 | orchestrator | Wednesday 07 January 2026 00:54:44 +0000 (0:00:02.310) 0:03:38.037 *****
2026-01-07 00:57:50.209010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:57:50.209016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:57:50.209024 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.209028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:57:50.209032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:57:50.209037 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.209040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:57:50.209045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:57:50.209049 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.209053 | orchestrator |
2026-01-07 00:57:50.209056 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-01-07 00:57:50.209060 | orchestrator | Wednesday 07 January 2026 00:54:47 +0000 (0:00:02.738) 0:03:40.776 *****
2026-01-07 00:57:50.209064 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.209068 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.209072 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.209075 | orchestrator |
2026-01-07 00:57:50.209081 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-01-07 00:57:50.209085 | orchestrator | Wednesday 07 January 2026 00:54:49 +0000 (0:00:01.778) 0:03:42.554 *****
2026-01-07 00:57:50.209089 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.209093 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.209097 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.209100 | orchestrator |
2026-01-07 00:57:50.209104 | orchestrator | TASK [include_role : masakari] *************************************************
2026-01-07 00:57:50.209111 | orchestrator | Wednesday 07 January 2026 00:54:50 +0000 (0:00:00.316) 0:03:43.984 *****
2026-01-07 00:57:50.209142 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.209149 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.209153 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.209157 | orchestrator |
2026-01-07 00:57:50.209161 | orchestrator | TASK [include_role : memcached] ************************************************
2026-01-07 00:57:50.209165 | orchestrator | Wednesday 07 January 2026 00:54:50 +0000 (0:00:00.316) 0:03:44.301 *****
2026-01-07 00:57:50.209169 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:57:50.209172 | orchestrator |
2026-01-07 00:57:50.209176 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-01-07 00:57:50.209181 | orchestrator | Wednesday 07 January 2026 00:54:52 +0000 (0:00:01.447) 0:03:45.748 *****
2026-01-07 00:57:50.209185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-07 00:57:50.209190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-07 00:57:50.209195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-07 00:57:50.209199 | orchestrator |
2026-01-07 00:57:50.209202 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-01-07 00:57:50.209206 | orchestrator | Wednesday 07 January 2026 00:54:53 +0000 (0:00:01.360) 0:03:47.108 *****
2026-01-07 00:57:50.209210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-07 00:57:50.209255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-07 00:57:50.209262 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.209266 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.209270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-07 00:57:50.209274 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.209277 | orchestrator |
2026-01-07 00:57:50.209281 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-01-07 00:57:50.209286 | orchestrator | Wednesday 07 January 2026 00:54:54 +0000 (0:00:00.429) 0:03:47.538 *****
2026-01-07 00:57:50.209293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-07 00:57:50.209300 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.209306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-07 00:57:50.209312 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.209318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-07 00:57:50.209323 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.209329 | orchestrator |
2026-01-07 00:57:50.209334 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-01-07 00:57:50.209340 | orchestrator | Wednesday 07 January 2026 00:54:54 +0000 (0:00:00.866) 0:03:48.404 *****
2026-01-07 00:57:50.209346 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.209352 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.209359 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.209365 | orchestrator |
2026-01-07 00:57:50.209370 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-01-07 00:57:50.209376 | orchestrator | Wednesday 07 January 2026 00:54:55 +0000 (0:00:00.468) 0:03:48.873 *****
2026-01-07 00:57:50.209381 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.209393 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.209400 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.209407 | orchestrator |
2026-01-07 00:57:50.209412 | orchestrator | TASK [include_role : mistral] **************************************************
2026-01-07 00:57:50.209415 | orchestrator | Wednesday 07 January 2026 00:54:56 +0000 (0:00:01.278) 0:03:50.151 *****
2026-01-07 00:57:50.209419 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.209423 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.209427 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.209431 | orchestrator |
2026-01-07 00:57:50.209434 | orchestrator | TASK [include_role : neutron] **************************************************
2026-01-07 00:57:50.209438 | orchestrator | Wednesday 07 January 2026 00:54:57 +0000 (0:00:00.327) 0:03:50.479 *****
2026-01-07 00:57:50.209442 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:57:50.209445 | orchestrator |
2026-01-07 00:57:50.209449 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-01-07 00:57:50.209453 | orchestrator | Wednesday 07 January 2026 00:54:58 +0000 (0:00:01.439) 0:03:51.918 *****
2026-01-07 00:57:50.209500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-07 00:57:50.209507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:57:50.209511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:57:50.209516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:57:50.209523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-07 00:57:50.209530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:57:50.209562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-07 00:57:50.209570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image':
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.209587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 00:57:50.209655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:57:50.209722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.209728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:57:50.209740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 00:57:50.209783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.209805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 
'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:57:50.209864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:57:50.209916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}}})  2026-01-07 00:57:50.209920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.209927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209935 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.209962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209971 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.209979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.209983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:57:50.209987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.209991 | orchestrator | 2026-01-07 00:57:50.209995 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-07 00:57:50.209999 | orchestrator | Wednesday 07 January 2026 00:55:02 +0000 (0:00:04.345) 0:03:56.263 ***** 2026-01-07 00:57:50.210075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 00:57:50.210083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:57:50.210107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 00:57:50.210175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.210226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:57:50.210255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:57:50.210309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.210317 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.210354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-01-07 00:57:50.210367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 00:57:50.210417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:57:50.210449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.210464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210469 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.210548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:57:50.210559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.210674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:57:50.210701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.210705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:57:50.210709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:57:50.210716 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.210722 | orchestrator | 2026-01-07 00:57:50.210728 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-07 00:57:50.210740 | orchestrator | 
Wednesday 07 January 2026 00:55:04 +0000 (0:00:01.614) 0:03:57.877 ***** 2026-01-07 00:57:50.210751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:57:50.210758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:57:50.210766 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.210793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:57:50.210801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:57:50.210808 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.210814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:57:50.210821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:57:50.210825 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.210829 | orchestrator | 2026-01-07 00:57:50.210833 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-07 00:57:50.210837 | orchestrator | Wednesday 07 January 2026 00:55:06 +0000 (0:00:02.085) 0:03:59.962 ***** 2026-01-07 
00:57:50.210841 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.210844 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.210848 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.210852 | orchestrator | 2026-01-07 00:57:50.210856 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-07 00:57:50.210859 | orchestrator | Wednesday 07 January 2026 00:55:07 +0000 (0:00:01.313) 0:04:01.276 ***** 2026-01-07 00:57:50.210863 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.210867 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.210870 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.210874 | orchestrator | 2026-01-07 00:57:50.210878 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-07 00:57:50.210882 | orchestrator | Wednesday 07 January 2026 00:55:09 +0000 (0:00:02.102) 0:04:03.379 ***** 2026-01-07 00:57:50.210886 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.210889 | orchestrator | 2026-01-07 00:57:50.210893 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-07 00:57:50.210897 | orchestrator | Wednesday 07 January 2026 00:55:11 +0000 (0:00:01.212) 0:04:04.591 ***** 2026-01-07 00:57:50.210901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.210914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.210938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.210943 | orchestrator | 2026-01-07 00:57:50.210947 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-07 00:57:50.210951 | orchestrator | Wednesday 07 January 2026 00:55:15 +0000 (0:00:03.880) 0:04:08.472 ***** 2026-01-07 00:57:50.210956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.210960 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.210964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.210972 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.210976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.210981 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.210985 | orchestrator | 2026-01-07 00:57:50.210988 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-07 00:57:50.210992 | orchestrator | Wednesday 07 January 2026 00:55:15 +0000 (0:00:00.515) 0:04:08.987 ***** 2026-01-07 00:57:50.210999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211009 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.211025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211033 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.211037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211045 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.211049 | orchestrator | 2026-01-07 00:57:50.211053 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-07 00:57:50.211056 | orchestrator | Wednesday 07 January 2026 00:55:16 +0000 (0:00:00.713) 0:04:09.701 ***** 2026-01-07 00:57:50.211060 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.211064 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.211068 | 
orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.211072 | orchestrator | 2026-01-07 00:57:50.211076 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-07 00:57:50.211079 | orchestrator | Wednesday 07 January 2026 00:55:17 +0000 (0:00:01.390) 0:04:11.092 ***** 2026-01-07 00:57:50.211083 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.211087 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.211091 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.211094 | orchestrator | 2026-01-07 00:57:50.211098 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-07 00:57:50.211106 | orchestrator | Wednesday 07 January 2026 00:55:19 +0000 (0:00:02.234) 0:04:13.327 ***** 2026-01-07 00:57:50.211110 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.211114 | orchestrator | 2026-01-07 00:57:50.211118 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-07 00:57:50.211122 | orchestrator | Wednesday 07 January 2026 00:55:21 +0000 (0:00:01.563) 0:04:14.890 ***** 2026-01-07 00:57:50.211128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.211133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.211166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.211195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211205 | orchestrator | 2026-01-07 00:57:50.211210 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-07 00:57:50.211218 | orchestrator | Wednesday 07 January 2026 00:55:26 +0000 (0:00:05.026) 0:04:19.916 ***** 2026-01-07 00:57:50.211224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.211229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211238 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.211258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.211264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211277 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.211282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.211289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.211312 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.211317 | orchestrator | 2026-01-07 00:57:50.211321 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-07 00:57:50.211326 | orchestrator | Wednesday 07 January 2026 00:55:27 +0000 (0:00:00.958) 0:04:20.875 ***** 2026-01-07 00:57:50.211333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 
00:57:50.211358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211364 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.211370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211395 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.211401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211414 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:57:50.211426 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.211433 | orchestrator | 2026-01-07 00:57:50.211440 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-07 00:57:50.211446 | orchestrator | Wednesday 07 January 2026 00:55:28 +0000 (0:00:01.302) 0:04:22.178 ***** 2026-01-07 00:57:50.211453 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.211459 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.211466 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.211472 | orchestrator | 2026-01-07 00:57:50.211478 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-07 00:57:50.211485 | orchestrator | Wednesday 07 January 2026 00:55:30 +0000 (0:00:01.419) 0:04:23.597 ***** 2026-01-07 00:57:50.211492 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.211503 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.211511 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.211518 | orchestrator | 2026-01-07 00:57:50.211525 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-07 00:57:50.211535 | orchestrator | Wednesday 07 January 2026 00:55:32 +0000 (0:00:02.198) 0:04:25.796 ***** 2026-01-07 00:57:50.211541 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.211546 | orchestrator | 2026-01-07 00:57:50.211552 | orchestrator | TASK 
[nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-07 00:57:50.211587 | orchestrator | Wednesday 07 January 2026 00:55:33 +0000 (0:00:01.610) 0:04:27.407 ***** 2026-01-07 00:57:50.211594 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-07 00:57:50.211619 | orchestrator | 2026-01-07 00:57:50.211626 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-07 00:57:50.211632 | orchestrator | Wednesday 07 January 2026 00:55:34 +0000 (0:00:00.824) 0:04:28.231 ***** 2026-01-07 00:57:50.211639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-07 00:57:50.211646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-07 00:57:50.211653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-07 00:57:50.211659 | orchestrator | 2026-01-07 00:57:50.211663 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-07 00:57:50.211667 | orchestrator | Wednesday 07 January 2026 00:55:39 +0000 (0:00:04.410) 0:04:32.641 ***** 2026-01-07 00:57:50.211671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:57:50.211675 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.211679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:57:50.211683 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.211693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 
'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:57:50.211702 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.211706 | orchestrator | 2026-01-07 00:57:50.211710 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-07 00:57:50.211714 | orchestrator | Wednesday 07 January 2026 00:55:40 +0000 (0:00:01.383) 0:04:34.025 ***** 2026-01-07 00:57:50.211735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:57:50.211740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:57:50.211745 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.211749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:57:50.211753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:57:50.211757 | orchestrator | skipping: 
[testbed-node-1] 2026-01-07 00:57:50.211761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:57:50.211765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:57:50.211769 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.211773 | orchestrator | 2026-01-07 00:57:50.211777 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-07 00:57:50.211780 | orchestrator | Wednesday 07 January 2026 00:55:42 +0000 (0:00:01.487) 0:04:35.513 ***** 2026-01-07 00:57:50.211784 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.211788 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.211792 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.211796 | orchestrator | 2026-01-07 00:57:50.211800 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-07 00:57:50.211803 | orchestrator | Wednesday 07 January 2026 00:55:44 +0000 (0:00:02.555) 0:04:38.069 ***** 2026-01-07 00:57:50.211807 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.211811 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.211815 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.211819 | orchestrator | 2026-01-07 00:57:50.211822 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-07 00:57:50.211826 | orchestrator | Wednesday 07 January 2026 00:55:47 +0000 (0:00:03.172) 0:04:41.241 ***** 2026-01-07 00:57:50.211830 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-07 00:57:50.211834 | orchestrator | 2026-01-07 00:57:50.211838 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-07 00:57:50.211846 | orchestrator | Wednesday 07 January 2026 00:55:49 +0000 (0:00:01.390) 0:04:42.632 ***** 2026-01-07 00:57:50.211850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:57:50.211855 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.211859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:57:50.211863 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.211882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:57:50.211887 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.211891 | orchestrator | 2026-01-07 00:57:50.211895 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-07 00:57:50.211899 | orchestrator | Wednesday 07 January 2026 00:55:50 +0000 (0:00:01.202) 0:04:43.834 ***** 2026-01-07 00:57:50.211903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:57:50.211907 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.211910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:57:50.211914 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.211918 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:57:50.211926 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.211930 | orchestrator | 2026-01-07 00:57:50.211934 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-07 00:57:50.211938 | orchestrator | Wednesday 07 January 2026 00:55:51 +0000 (0:00:01.275) 0:04:45.110 ***** 2026-01-07 00:57:50.211941 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.211945 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.211949 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.211953 | orchestrator | 2026-01-07 00:57:50.211957 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-07 00:57:50.211960 | orchestrator | Wednesday 07 January 2026 00:55:53 +0000 (0:00:01.861) 0:04:46.972 ***** 2026-01-07 00:57:50.211964 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.211968 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.211972 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.211976 | orchestrator | 2026-01-07 00:57:50.211980 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-07 00:57:50.211984 | orchestrator | Wednesday 07 January 2026 00:55:55 +0000 (0:00:02.370) 0:04:49.343 ***** 2026-01-07 00:57:50.211988 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.211992 | orchestrator | ok: [testbed-node-1] 2026-01-07 
00:57:50.211995 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.211999 | orchestrator | 2026-01-07 00:57:50.212003 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-07 00:57:50.212007 | orchestrator | Wednesday 07 January 2026 00:55:59 +0000 (0:00:03.315) 0:04:52.659 ***** 2026-01-07 00:57:50.212011 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-07 00:57:50.212015 | orchestrator | 2026-01-07 00:57:50.212018 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-07 00:57:50.212022 | orchestrator | Wednesday 07 January 2026 00:56:00 +0000 (0:00:00.835) 0:04:53.494 ***** 2026-01-07 00:57:50.212029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:57:50.212033 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.212051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}}}})  2026-01-07 00:57:50.212055 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.212059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:57:50.212063 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.212069 | orchestrator | 2026-01-07 00:57:50.212075 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-07 00:57:50.212086 | orchestrator | Wednesday 07 January 2026 00:56:01 +0000 (0:00:01.305) 0:04:54.800 ***** 2026-01-07 00:57:50.212093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:57:50.212100 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.212107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:57:50.212113 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.212120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:57:50.212126 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.212132 | orchestrator | 2026-01-07 00:57:50.212138 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-07 00:57:50.212144 | orchestrator | Wednesday 07 January 2026 00:56:02 +0000 (0:00:01.345) 0:04:56.145 ***** 2026-01-07 00:57:50.212151 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.212167 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.212173 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.212180 | orchestrator | 2026-01-07 00:57:50.212186 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-07 00:57:50.212192 | orchestrator | Wednesday 07 January 2026 00:56:04 +0000 (0:00:01.552) 0:04:57.698 ***** 2026-01-07 00:57:50.212198 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.212204 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.212210 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.212216 | orchestrator 
| 2026-01-07 00:57:50.212222 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-07 00:57:50.212229 | orchestrator | Wednesday 07 January 2026 00:56:06 +0000 (0:00:02.549) 0:05:00.248 ***** 2026-01-07 00:57:50.212236 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.212243 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.212250 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.212256 | orchestrator | 2026-01-07 00:57:50.212263 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-07 00:57:50.212276 | orchestrator | Wednesday 07 January 2026 00:56:10 +0000 (0:00:03.383) 0:05:03.632 ***** 2026-01-07 00:57:50.212280 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.212284 | orchestrator | 2026-01-07 00:57:50.212288 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-07 00:57:50.212292 | orchestrator | Wednesday 07 January 2026 00:56:11 +0000 (0:00:01.593) 0:05:05.226 ***** 2026-01-07 00:57:50.212322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.212336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:57:50.212344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212359 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.212370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.212394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:57:50.212399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 
00:57:50.212411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 00:57:50.212418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:57:50.212439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.212453 | orchestrator | 2026-01-07 00:57:50.212457 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-07 00:57:50.212461 | orchestrator | Wednesday 07 January 2026 00:56:15 +0000 (0:00:03.381) 0:05:08.607 ***** 2026-01-07 00:57:50.212465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.212469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:57:50.212476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.212505 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.212509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.212514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:57:50.212518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.212551 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.212556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 00:57:50.212560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:57:50.212565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:57:50.212578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:57:50.212590 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.212596 | orchestrator | 2026-01-07 00:57:50.212623 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-07 00:57:50.212630 | orchestrator | Wednesday 07 January 2026 00:56:15 +0000 (0:00:00.732) 0:05:09.339 ***** 2026-01-07 00:57:50.212636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:57:50.212647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:57:50.212651 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.212670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:57:50.212674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:57:50.212679 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.212682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:57:50.212686 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:57:50.212690 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.212694 | orchestrator | 2026-01-07 00:57:50.212698 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-07 00:57:50.212702 | orchestrator | Wednesday 07 January 2026 00:56:17 +0000 (0:00:01.473) 0:05:10.812 ***** 2026-01-07 00:57:50.212706 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.212709 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.212715 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.212721 | orchestrator | 2026-01-07 00:57:50.212727 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-07 00:57:50.212733 | orchestrator | Wednesday 07 January 2026 00:56:18 +0000 (0:00:01.490) 0:05:12.303 ***** 2026-01-07 00:57:50.212739 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.212745 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.212752 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.212758 | orchestrator | 2026-01-07 00:57:50.212764 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-07 00:57:50.212771 | orchestrator | Wednesday 07 January 2026 00:56:21 +0000 (0:00:02.278) 0:05:14.581 ***** 2026-01-07 00:57:50.212777 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.212780 | orchestrator | 2026-01-07 00:57:50.212784 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-07 00:57:50.212788 | orchestrator | Wednesday 07 January 2026 00:56:22 +0000 (0:00:01.384) 0:05:15.966 ***** 2026-01-07 00:57:50.212792 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:57:50.212802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:57:50.212825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:57:50.212833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:57:50.212841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:57:50.212854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:57:50.212861 | orchestrator | 2026-01-07 00:57:50.212868 | orchestrator | TASK [haproxy-config : 
Add configuration for opensearch when using single external frontend] *** 2026-01-07 00:57:50.212874 | orchestrator | Wednesday 07 January 2026 00:56:28 +0000 (0:00:05.618) 0:05:21.584 ***** 2026-01-07 00:57:50.212903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:57:50.212911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:57:50.212918 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.212924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:57:50.212936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:57:50.212944 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.212974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:57:50.212982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:57:50.212989 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.212995 | orchestrator | 2026-01-07 00:57:50.213001 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-07 00:57:50.213007 | orchestrator | Wednesday 07 January 2026 00:56:28 +0000 (0:00:00.632) 0:05:22.217 ***** 2026-01-07 00:57:50.213014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-07 00:57:50.213020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:57:50.213033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:57:50.213040 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.213045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-07 00:57:50.213052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:57:50.213058 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:57:50.213064 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.213070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-07 00:57:50.213078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:57:50.213082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:57:50.213086 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.213090 | orchestrator | 2026-01-07 00:57:50.213094 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-07 00:57:50.213098 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:00.936) 0:05:23.154 ***** 2026-01-07 00:57:50.213106 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.213110 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.213114 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.213117 | orchestrator | 2026-01-07 00:57:50.213121 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-07 00:57:50.213125 | orchestrator | Wednesday 07 January 2026 00:56:30 +0000 (0:00:00.820) 0:05:23.974 ***** 2026-01-07 00:57:50.213129 
| orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.213132 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.213136 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.213140 | orchestrator | 2026-01-07 00:57:50.213165 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-07 00:57:50.213170 | orchestrator | Wednesday 07 January 2026 00:56:31 +0000 (0:00:01.316) 0:05:25.290 ***** 2026-01-07 00:57:50.213174 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:57:50.213178 | orchestrator | 2026-01-07 00:57:50.213182 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-07 00:57:50.213187 | orchestrator | Wednesday 07 January 2026 00:56:33 +0000 (0:00:01.470) 0:05:26.760 ***** 2026-01-07 00:57:50.213194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 00:57:50.213212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:57:50.213222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 00:57:50.213276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:57:50.213283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 00:57:50.213298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:57:50.213317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 00:57:50.213385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:57:50.213392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 00:57:50.213426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:57:50.213431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 00:57:50.213466 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:57:50.213477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213495 | orchestrator | 2026-01-07 00:57:50.213502 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-07 00:57:50.213508 | orchestrator | Wednesday 07 January 2026 00:56:38 +0000 (0:00:04.874) 0:05:31.635 ***** 2026-01-07 00:57:50.213514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 00:57:50.213525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:57:50.213542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-07 00:57:50.213575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:57:50.213590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 00:57:50.213653 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.213660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:57:50.213666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-07 00:57:50.213729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:57:50.213736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213754 | orchestrator | skipping: [testbed-node-1] 2026-01-07 
00:57:50.213760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 00:57:50.213783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:57:50.213790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:57:50.213811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}})  2026-01-07 00:57:50.213818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:57:50.213836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:57:50.213847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-07 00:57:50.213854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 00:57:50.213859 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.213865 | orchestrator |
2026-01-07 00:57:50.213871 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-01-07 00:57:50.213877 | orchestrator | Wednesday 07 January 2026 00:56:39 +0000 (0:00:01.224) 0:05:32.859 *****
2026-01-07 00:57:50.213884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-07 00:57:50.213891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-07 00:57:50.213899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-07 00:57:50.213906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-07 00:57:50.213915 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.213921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-07 00:57:50.213928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-07 00:57:50.213935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-07 00:57:50.213948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-07 00:57:50.213955 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.213961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-07 00:57:50.213967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-07 00:57:50.213978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-07 00:57:50.213990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-07 00:57:50.213997 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214004 | orchestrator |
2026-01-07 00:57:50.214010 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-01-07 00:57:50.214050 | orchestrator | Wednesday 07 January 2026 00:56:40 +0000 (0:00:01.064) 0:05:33.923 *****
2026-01-07 00:57:50.214057 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214064 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214070 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214077 | orchestrator |
2026-01-07 00:57:50.214084 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-01-07 00:57:50.214092 | orchestrator | Wednesday 07 January 2026 00:56:40 +0000 (0:00:00.427) 0:05:34.351 *****
2026-01-07 00:57:50.214099 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214105 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214112 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214119 | orchestrator |
2026-01-07 00:57:50.214125 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-01-07 00:57:50.214131 | orchestrator | Wednesday 07 January 2026 00:56:42 +0000 (0:00:01.484) 0:05:35.835 *****
2026-01-07 00:57:50.214137 | orchestrator | included:
rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:57:50.214143 | orchestrator |
2026-01-07 00:57:50.214149 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-01-07 00:57:50.214156 | orchestrator | Wednesday 07 January 2026 00:56:44 +0000 (0:00:01.772) 0:05:37.608 *****
2026-01-07 00:57:50.214163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-07 00:57:50.214176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-07 00:57:50.214190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-07 00:57:50.214198 | orchestrator |
2026-01-07 00:57:50.214210 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-01-07 00:57:50.214216 | orchestrator | Wednesday 07 January 2026 00:56:46 +0000 (0:00:02.667) 0:05:40.276 *****
2026-01-07 00:57:50.214224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-07 00:57:50.214231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-07 00:57:50.214244 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214251 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-07 00:57:50.214264 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214271 | orchestrator |
2026-01-07 00:57:50.214278 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-01-07 00:57:50.214284 | orchestrator | Wednesday 07 January 2026 00:56:47 +0000 (0:00:00.414) 0:05:40.691 *****
2026-01-07 00:57:50.214291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-07 00:57:50.214298 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-07 00:57:50.214315 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-07 00:57:50.214328 | orchestrator |
skipping: [testbed-node-2]
2026-01-07 00:57:50.214334 | orchestrator |
2026-01-07 00:57:50.214341 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-01-07 00:57:50.214348 | orchestrator | Wednesday 07 January 2026 00:56:48 +0000 (0:00:01.023) 0:05:41.714 *****
2026-01-07 00:57:50.214358 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214364 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214371 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214378 | orchestrator |
2026-01-07 00:57:50.214385 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-01-07 00:57:50.214392 | orchestrator | Wednesday 07 January 2026 00:56:48 +0000 (0:00:00.418) 0:05:42.133 *****
2026-01-07 00:57:50.214399 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214406 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214413 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214420 | orchestrator |
2026-01-07 00:57:50.214427 | orchestrator | TASK [include_role : skyline] **************************************************
2026-01-07 00:57:50.214434 | orchestrator | Wednesday 07 January 2026 00:56:50 +0000 (0:00:01.355) 0:05:43.488 *****
2026-01-07 00:57:50.214440 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:57:50.214447 | orchestrator |
2026-01-07 00:57:50.214453 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-01-07 00:57:50.214466 | orchestrator | Wednesday 07 January 2026 00:56:51 +0000 (0:00:01.782) 0:05:45.271 *****
2026-01-07 00:57:50.214474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214533 | orchestrator |
2026-01-07 00:57:50.214540 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-01-07 00:57:50.214547 | orchestrator | Wednesday 07 January 2026 00:56:58 +0000 (0:00:06.264) 0:05:51.535 *****
2026-01-07 00:57:50.214553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214574 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214647 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-07 00:57:50.214673 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214679 | orchestrator |
2026-01-07 00:57:50.214693 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-01-07 00:57:50.214704 | orchestrator | Wednesday 07
January 2026 00:56:58 +0000 (0:00:00.632) 0:05:52.167 *****
2026-01-07 00:57:50.214716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214743 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214774 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-07 00:57:50.214805 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214811 | orchestrator |
2026-01-07 00:57:50.214817 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-01-07 00:57:50.214823 | orchestrator | Wednesday 07 January 2026 00:57:00 +0000 (0:00:01.656) 0:05:53.824 *****
2026-01-07 00:57:50.214829 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.214835 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.214841 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.214846 | orchestrator |
2026-01-07 00:57:50.214852 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-01-07 00:57:50.214858 | orchestrator | Wednesday 07 January 2026 00:57:01 +0000 (0:00:01.556) 0:05:55.380 *****
2026-01-07 00:57:50.214863 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:57:50.214869 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:57:50.214881 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:57:50.214888 | orchestrator |
2026-01-07 00:57:50.214894 | orchestrator | TASK [include_role : swift] ****************************************************
2026-01-07 00:57:50.214899 | orchestrator | Wednesday 07 January 2026 00:57:04 +0000 (0:00:02.361) 0:05:57.741 *****
2026-01-07 00:57:50.214905 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214910 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214915 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214921 | orchestrator |
2026-01-07 00:57:50.214933 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-01-07 00:57:50.214938 | orchestrator | Wednesday 07 January 2026 00:57:04 +0000 (0:00:00.327) 0:05:58.069 *****
2026-01-07 00:57:50.214945 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214951 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214957 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.214962 | orchestrator |
2026-01-07 00:57:50.214968 | orchestrator | TASK [include_role : trove] ****************************************************
2026-01-07 00:57:50.214978 | orchestrator | Wednesday 07 January 2026 00:57:04 +0000 (0:00:00.293) 0:05:58.363 *****
2026-01-07 00:57:50.214984 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.214989 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.214995 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.215001 | orchestrator |
2026-01-07 00:57:50.215007 | orchestrator | TASK [include_role : venus] ****************************************************
2026-01-07 00:57:50.215013 | orchestrator | Wednesday 07 January 2026 00:57:05 +0000 (0:00:00.651) 0:05:59.015 *****
2026-01-07 00:57:50.215019 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.215025 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.215030 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.215035 | orchestrator |
2026-01-07 00:57:50.215041 | orchestrator | TASK [include_role : watcher] **************************************************
2026-01-07 00:57:50.215047 | orchestrator | Wednesday 07 January 2026 00:57:05 +0000 (0:00:00.315) 0:05:59.330 *****
2026-01-07 00:57:50.215053 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.215059 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.215064 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.215071 | orchestrator |
2026-01-07 00:57:50.215076 | orchestrator | TASK [include_role : zun] ******************************************************
2026-01-07 00:57:50.215082 | orchestrator | Wednesday 07 January 2026 00:57:06 +0000 (0:00:00.308) 0:05:59.639 *****
2026-01-07 00:57:50.215086 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:57:50.215090 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:57:50.215094 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:57:50.215097 | orchestrator |
2026-01-07 00:57:50.215101 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-01-07 00:57:50.215105 | orchestrator | Wednesday 07 January 2026 00:57:07 +0000 (0:00:00.899) 0:06:00.538 *****
2026-01-07 00:57:50.215109 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:57:50.215113 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:57:50.215116 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:57:50.215120 | orchestrator |
2026-01-07 00:57:50.215124 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by
status] ********************** 2026-01-07 00:57:50.215128 | orchestrator | Wednesday 07 January 2026 00:57:07 +0000 (0:00:00.700) 0:06:01.238 ***** 2026-01-07 00:57:50.215132 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.215135 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.215139 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.215143 | orchestrator | 2026-01-07 00:57:50.215147 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-07 00:57:50.215150 | orchestrator | Wednesday 07 January 2026 00:57:08 +0000 (0:00:00.335) 0:06:01.574 ***** 2026-01-07 00:57:50.215154 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.215158 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.215162 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.215171 | orchestrator | 2026-01-07 00:57:50.215175 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-07 00:57:50.215179 | orchestrator | Wednesday 07 January 2026 00:57:08 +0000 (0:00:00.842) 0:06:02.416 ***** 2026-01-07 00:57:50.215183 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.215187 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.215190 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.215194 | orchestrator | 2026-01-07 00:57:50.215198 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-07 00:57:50.215202 | orchestrator | Wednesday 07 January 2026 00:57:10 +0000 (0:00:01.186) 0:06:03.603 ***** 2026-01-07 00:57:50.215206 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.215209 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.215213 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.215217 | orchestrator | 2026-01-07 00:57:50.215221 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-07 00:57:50.215224 | orchestrator | 
Wednesday 07 January 2026 00:57:11 +0000 (0:00:00.850) 0:06:04.453 ***** 2026-01-07 00:57:50.215228 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.215232 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.215236 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.215239 | orchestrator | 2026-01-07 00:57:50.215243 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-07 00:57:50.215247 | orchestrator | Wednesday 07 January 2026 00:57:16 +0000 (0:00:05.394) 0:06:09.847 ***** 2026-01-07 00:57:50.215251 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.215255 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.215258 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.215262 | orchestrator | 2026-01-07 00:57:50.215266 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-07 00:57:50.215269 | orchestrator | Wednesday 07 January 2026 00:57:19 +0000 (0:00:02.740) 0:06:12.588 ***** 2026-01-07 00:57:50.215273 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.215277 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.215281 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.215284 | orchestrator | 2026-01-07 00:57:50.215288 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-07 00:57:50.215292 | orchestrator | Wednesday 07 January 2026 00:57:32 +0000 (0:00:13.385) 0:06:25.973 ***** 2026-01-07 00:57:50.215296 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.215301 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.215307 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.215313 | orchestrator | 2026-01-07 00:57:50.215319 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-07 00:57:50.215333 | orchestrator | Wednesday 07 January 2026 00:57:33 +0000 
(0:00:01.244) 0:06:27.218 ***** 2026-01-07 00:57:50.215339 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:57:50.215345 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:57:50.215351 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:57:50.215357 | orchestrator | 2026-01-07 00:57:50.215363 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-07 00:57:50.215373 | orchestrator | Wednesday 07 January 2026 00:57:44 +0000 (0:00:10.288) 0:06:37.506 ***** 2026-01-07 00:57:50.215380 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.215386 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.215392 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.215398 | orchestrator | 2026-01-07 00:57:50.215404 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-07 00:57:50.215410 | orchestrator | Wednesday 07 January 2026 00:57:44 +0000 (0:00:00.360) 0:06:37.867 ***** 2026-01-07 00:57:50.215416 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.215427 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.215432 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.215438 | orchestrator | 2026-01-07 00:57:50.215444 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-07 00:57:50.215457 | orchestrator | Wednesday 07 January 2026 00:57:44 +0000 (0:00:00.336) 0:06:38.204 ***** 2026-01-07 00:57:50.215464 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.215471 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.215479 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.215486 | orchestrator | 2026-01-07 00:57:50.215494 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-07 00:57:50.215499 | orchestrator | Wednesday 07 January 2026 00:57:45 +0000 
(0:00:00.686) 0:06:38.890 ***** 2026-01-07 00:57:50.215505 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.215512 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.215518 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.215524 | orchestrator | 2026-01-07 00:57:50.215530 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-07 00:57:50.215536 | orchestrator | Wednesday 07 January 2026 00:57:45 +0000 (0:00:00.375) 0:06:39.265 ***** 2026-01-07 00:57:50.215541 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.215547 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.215552 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.215557 | orchestrator | 2026-01-07 00:57:50.215563 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-07 00:57:50.215568 | orchestrator | Wednesday 07 January 2026 00:57:46 +0000 (0:00:00.359) 0:06:39.624 ***** 2026-01-07 00:57:50.215574 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:57:50.215580 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:57:50.215586 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:57:50.215591 | orchestrator | 2026-01-07 00:57:50.215597 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-07 00:57:50.215626 | orchestrator | Wednesday 07 January 2026 00:57:46 +0000 (0:00:00.336) 0:06:39.960 ***** 2026-01-07 00:57:50.215633 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.215639 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.215646 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.215651 | orchestrator | 2026-01-07 00:57:50.215657 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-07 00:57:50.215663 | orchestrator | Wednesday 07 January 2026 00:57:47 +0000 (0:00:01.410) 
0:06:41.371 ***** 2026-01-07 00:57:50.215668 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:57:50.215674 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:57:50.215680 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:57:50.215686 | orchestrator | 2026-01-07 00:57:50.215692 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:57:50.215699 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-07 00:57:50.215707 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-07 00:57:50.215712 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-07 00:57:50.215718 | orchestrator | 2026-01-07 00:57:50.215723 | orchestrator | 2026-01-07 00:57:50.215729 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:57:50.215735 | orchestrator | Wednesday 07 January 2026 00:57:48 +0000 (0:00:00.935) 0:06:42.306 ***** 2026-01-07 00:57:50.215740 | orchestrator | =============================================================================== 2026-01-07 00:57:50.215746 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.39s 2026-01-07 00:57:50.215751 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.29s 2026-01-07 00:57:50.215757 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.65s 2026-01-07 00:57:50.215762 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.26s 2026-01-07 00:57:50.215775 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.07s 2026-01-07 00:57:50.215780 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.63s 2026-01-07 
00:57:50.215803 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.62s 2026-01-07 00:57:50.215809 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.39s 2026-01-07 00:57:50.215815 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.03s 2026-01-07 00:57:50.215820 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.87s 2026-01-07 00:57:50.215826 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.57s 2026-01-07 00:57:50.215832 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.57s 2026-01-07 00:57:50.215839 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.42s 2026-01-07 00:57:50.215846 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.41s 2026-01-07 00:57:50.215856 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.40s 2026-01-07 00:57:50.215862 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.35s 2026-01-07 00:57:50.215868 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.35s 2026-01-07 00:57:50.215874 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.22s 2026-01-07 00:57:50.215880 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.20s 2026-01-07 00:57:50.215886 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.11s 2026-01-07 00:57:50.215899 | orchestrator | 2026-01-07 00:57:50 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:50.215906 | orchestrator | 2026-01-07 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 
00:57:53.244293 | orchestrator | 2026-01-07 00:57:53 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 00:57:53.246038 | orchestrator | 2026-01-07 00:57:53 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:57:53.247375 | orchestrator | 2026-01-07 00:57:53 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 00:57:53.247660 | orchestrator | 2026-01-07 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:37.061526 | orchestrator | 2026-01-07 00:59:37 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 00:59:37.063510 | orchestrator | 2026-01-07 00:59:37 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in
state STARTED 2026-01-07 00:59:37.065621 | orchestrator | 2026-01-07 00:59:37 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 00:59:37.065783 | orchestrator | 2026-01-07 00:59:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:40.120877 | orchestrator | 2026-01-07 00:59:40 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 00:59:40.122495 | orchestrator | 2026-01-07 00:59:40 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state STARTED 2026-01-07 00:59:40.124324 | orchestrator | 2026-01-07 00:59:40 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 00:59:40.124487 | orchestrator | 2026-01-07 00:59:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:43.168098 | orchestrator | 2026-01-07 00:59:43 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 00:59:43.176443 | orchestrator | 2026-01-07 00:59:43 | INFO  | Task 508ee0b4-e355-4e9f-a604-bb9341ea9793 is in state SUCCESS 2026-01-07 00:59:43.178555 | orchestrator | 2026-01-07 00:59:43.178595 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 00:59:43.178601 | orchestrator | 2.16.14 2026-01-07 00:59:43.178605 | orchestrator | 2026-01-07 00:59:43.178609 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-07 00:59:43.178613 | orchestrator | 2026-01-07 00:59:43.178617 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-07 00:59:43.178621 | orchestrator | Wednesday 07 January 2026 00:48:36 +0000 (0:00:00.725) 0:00:00.725 ***** 2026-01-07 00:59:43.178625 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.178630 | orchestrator | 2026-01-07 00:59:43.178634 | 
orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-07 00:59:43.178638 | orchestrator | Wednesday 07 January 2026 00:48:38 +0000 (0:00:01.275) 0:00:02.001 *****
2026-01-07 00:59:43.178642 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.178646 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.178650 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.178654 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.178657 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.178661 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.178664 | orchestrator |
2026-01-07 00:59:43.178668 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-07 00:59:43.178671 | orchestrator | Wednesday 07 January 2026 00:48:40 +0000 (0:00:01.943) 0:00:03.944 *****
2026-01-07 00:59:43.178674 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.178690 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.178693 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.178696 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.178699 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.178702 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.178705 | orchestrator |
2026-01-07 00:59:43.178708 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-07 00:59:43.178712 | orchestrator | Wednesday 07 January 2026 00:48:41 +0000 (0:00:00.868) 0:00:04.813 *****
2026-01-07 00:59:43.178715 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.178718 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.178721 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.178724 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.178727 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.178730 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.178733 | orchestrator |
2026-01-07 00:59:43.178736 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-07 00:59:43.178739 | orchestrator | Wednesday 07 January 2026 00:48:42 +0000 (0:00:00.995) 0:00:05.808 *****
2026-01-07 00:59:43.178742 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.178745 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.178749 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.178752 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.178755 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.178758 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.178761 | orchestrator |
2026-01-07 00:59:43.178764 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-07 00:59:43.178767 | orchestrator | Wednesday 07 January 2026 00:48:42 +0000 (0:00:00.859) 0:00:06.668 *****
2026-01-07 00:59:43.178770 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.178773 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.178776 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.178779 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.178782 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.178785 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.178788 | orchestrator |
2026-01-07 00:59:43.178791 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-07 00:59:43.178794 | orchestrator | Wednesday 07 January 2026 00:48:43 +0000 (0:00:00.737) 0:00:07.406 *****
2026-01-07 00:59:43.178797 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.178800 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.178804 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.178807 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.178810 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.178813 | orchestrator | ok: [testbed-node-2]
2026-01-07 
00:59:43.178816 | orchestrator | 2026-01-07 00:59:43.178819 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-07 00:59:43.178822 | orchestrator | Wednesday 07 January 2026 00:48:44 +0000 (0:00:01.120) 0:00:08.526 ***** 2026-01-07 00:59:43.178825 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.178829 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.178832 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.178835 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.178873 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.178877 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.178880 | orchestrator | 2026-01-07 00:59:43.178883 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-07 00:59:43.178893 | orchestrator | Wednesday 07 January 2026 00:48:45 +0000 (0:00:01.001) 0:00:09.528 ***** 2026-01-07 00:59:43.178897 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.178900 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.178903 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.178906 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.178909 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.178912 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.178918 | orchestrator | 2026-01-07 00:59:43.178922 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-07 00:59:43.178925 | orchestrator | Wednesday 07 January 2026 00:48:46 +0000 (0:00:01.073) 0:00:10.601 ***** 2026-01-07 00:59:43.178928 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:59:43.178931 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:59:43.178934 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:59:43.178938 | orchestrator | 2026-01-07 00:59:43.178941 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-07 00:59:43.178944 | orchestrator | Wednesday 07 January 2026 00:48:47 +0000 (0:00:00.631) 0:00:11.233 ***** 2026-01-07 00:59:43.178947 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.178950 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.178953 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.178963 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.178966 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.178969 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.178973 | orchestrator | 2026-01-07 00:59:43.178976 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-07 00:59:43.178979 | orchestrator | Wednesday 07 January 2026 00:48:48 +0000 (0:00:00.902) 0:00:12.135 ***** 2026-01-07 00:59:43.178982 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:59:43.178985 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:59:43.178988 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:59:43.178991 | orchestrator | 2026-01-07 00:59:43.178994 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-07 00:59:43.178997 | orchestrator | Wednesday 07 January 2026 00:48:52 +0000 (0:00:03.653) 0:00:15.789 ***** 2026-01-07 00:59:43.179001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 00:59:43.179004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-07 00:59:43.179007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 00:59:43.179010 | 
orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179013 | orchestrator | 2026-01-07 00:59:43.179017 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-07 00:59:43.179020 | orchestrator | Wednesday 07 January 2026 00:48:52 +0000 (0:00:00.632) 0:00:16.421 ***** 2026-01-07 00:59:43.179024 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.179029 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.179032 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.179035 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179038 | orchestrator | 2026-01-07 00:59:43.179041 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-07 00:59:43.179067 | orchestrator | Wednesday 07 January 2026 00:48:53 +0000 (0:00:00.923) 0:00:17.344 ***** 2026-01-07 00:59:43.179072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.179079 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.179084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.179087 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179090 | orchestrator | 2026-01-07 00:59:43.179094 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-07 00:59:43.179097 | orchestrator | Wednesday 07 January 2026 00:48:54 +0000 (0:00:00.487) 0:00:17.832 ***** 2026-01-07 00:59:43.179104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-07 00:48:49.050799', 'end': '2026-01-07 00:48:49.346047', 'delta': '0:00:00.295248', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 
'item'})  2026-01-07 00:59:43.179114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-07 00:48:50.358209', 'end': '2026-01-07 00:48:50.662864', 'delta': '0:00:00.304655', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.179117 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-07 00:48:51.585052', 'end': '2026-01-07 00:48:51.876642', 'delta': '0:00:00.291590', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.179121 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179127 | orchestrator | 2026-01-07 00:59:43.179132 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-07 00:59:43.179136 | orchestrator | Wednesday 07 January 2026 00:48:54 +0000 (0:00:00.214) 0:00:18.047 ***** 2026-01-07 00:59:43.179142 | orchestrator | ok: [testbed-node-4] 2026-01-07 
00:59:43.179148 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.179323 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.179333 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.179336 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.179340 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.179377 | orchestrator | 2026-01-07 00:59:43.179382 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-07 00:59:43.179385 | orchestrator | Wednesday 07 January 2026 00:48:55 +0000 (0:00:01.708) 0:00:19.755 ***** 2026-01-07 00:59:43.179388 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:59:43.179391 | orchestrator | 2026-01-07 00:59:43.179395 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-07 00:59:43.179398 | orchestrator | Wednesday 07 January 2026 00:48:56 +0000 (0:00:00.572) 0:00:20.328 ***** 2026-01-07 00:59:43.179401 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179405 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.179408 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179411 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.179414 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179417 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179420 | orchestrator | 2026-01-07 00:59:43.179423 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-07 00:59:43.179426 | orchestrator | Wednesday 07 January 2026 00:48:58 +0000 (0:00:01.669) 0:00:21.997 ***** 2026-01-07 00:59:43.179429 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179433 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.179436 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179439 | orchestrator | skipping: [testbed-node-0] 2026-01-07 
00:59:43.179442 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179445 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179448 | orchestrator | 2026-01-07 00:59:43.179451 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 00:59:43.179454 | orchestrator | Wednesday 07 January 2026 00:49:00 +0000 (0:00:02.543) 0:00:24.541 ***** 2026-01-07 00:59:43.179460 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179463 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.179466 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179469 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.179473 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179476 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179479 | orchestrator | 2026-01-07 00:59:43.179482 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-07 00:59:43.179485 | orchestrator | Wednesday 07 January 2026 00:49:02 +0000 (0:00:02.029) 0:00:26.570 ***** 2026-01-07 00:59:43.179488 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179491 | orchestrator | 2026-01-07 00:59:43.179494 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-07 00:59:43.179497 | orchestrator | Wednesday 07 January 2026 00:49:03 +0000 (0:00:00.309) 0:00:26.879 ***** 2026-01-07 00:59:43.179500 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179503 | orchestrator | 2026-01-07 00:59:43.179506 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 00:59:43.179510 | orchestrator | Wednesday 07 January 2026 00:49:03 +0000 (0:00:00.394) 0:00:27.273 ***** 2026-01-07 00:59:43.179513 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179516 | orchestrator | skipping: [testbed-node-4] 2026-01-07 
00:59:43.179519 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179526 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.179530 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179533 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179536 | orchestrator | 2026-01-07 00:59:43.179539 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-07 00:59:43.179553 | orchestrator | Wednesday 07 January 2026 00:49:04 +0000 (0:00:01.058) 0:00:28.332 ***** 2026-01-07 00:59:43.179566 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.179569 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179572 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179576 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.179579 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179582 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179585 | orchestrator | 2026-01-07 00:59:43.179588 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-07 00:59:43.179591 | orchestrator | Wednesday 07 January 2026 00:49:05 +0000 (0:00:01.412) 0:00:29.745 ***** 2026-01-07 00:59:43.179594 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179597 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.179601 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179604 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.179607 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179610 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179613 | orchestrator | 2026-01-07 00:59:43.179616 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-07 00:59:43.179619 | orchestrator | Wednesday 07 January 2026 00:49:06 +0000 (0:00:00.902) 0:00:30.647 ***** 2026-01-07 
00:59:43.179622 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179626 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.179629 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179632 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.179635 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179638 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179641 | orchestrator | 2026-01-07 00:59:43.179644 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-07 00:59:43.179647 | orchestrator | Wednesday 07 January 2026 00:49:07 +0000 (0:00:00.795) 0:00:31.443 ***** 2026-01-07 00:59:43.179650 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179654 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.179657 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179660 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.179663 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179666 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179669 | orchestrator | 2026-01-07 00:59:43.179672 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-07 00:59:43.179675 | orchestrator | Wednesday 07 January 2026 00:49:08 +0000 (0:00:00.668) 0:00:32.112 ***** 2026-01-07 00:59:43.179678 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179681 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.179685 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179688 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.179691 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179694 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179697 | orchestrator | 2026-01-07 00:59:43.179834 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved 
symlinks] *** 2026-01-07 00:59:43.179840 | orchestrator | Wednesday 07 January 2026 00:49:09 +0000 (0:00:00.884) 0:00:32.996 ***** 2026-01-07 00:59:43.179844 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.179849 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.179853 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.179858 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.179862 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.179867 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.179872 | orchestrator | 2026-01-07 00:59:43.179876 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-07 00:59:43.179881 | orchestrator | Wednesday 07 January 2026 00:49:10 +0000 (0:00:00.847) 0:00:33.844 ***** 2026-01-07 00:59:43.179890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc-osd--block--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc', 'dm-uuid-LVM-8bK9ULb58KIMrsCGmdMXR1IVFLBmguBSIutgTi2cmowlm638qyWdp3yczOl3SY0m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.179901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35426297--011a--51b2--a2d6--4f3d2a544c0e-osd--block--35426297--011a--51b2--a2d6--4f3d2a544c0e', 'dm-uuid-LVM-XAwDBKXsEIC3fWQHPh980GebvskQX2lbzqEPkgZKUqKZnnP9ltkb2SFHiz002pst'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180088 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part1', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part14', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part15', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part16', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180128 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc-osd--block--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vSGfPS-hxuf-Lufz-XfkE-Ywjk-bxjG-7FXmso', 'scsi-0QEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463', 'scsi-SQEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--35426297--011a--51b2--a2d6--4f3d2a544c0e-osd--block--35426297--011a--51b2--a2d6--4f3d2a544c0e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z9iqRN-JMb3-ozU2-CggA-cwEO-iE1D-Q0xhxz', 'scsi-0QEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e', 'scsi-SQEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac', 'scsi-SQEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e6008a2--36a5--590e--8013--ca4c2218d3f7-osd--block--4e6008a2--36a5--590e--8013--ca4c2218d3f7', 'dm-uuid-LVM-DZLvgoHJB2dzrj4NMm2HmBFaLg5fGwVRHPF1iBjynLE7kXuSlbDawfn32gGQsT1u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16bf28f1--ae52--5ff4--8907--41e0bcdec1af-osd--block--16bf28f1--ae52--5ff4--8907--41e0bcdec1af', 
'dm-uuid-LVM-L4I3js6ulS27pfsMVBMrKX9few3BpmSOtHpsW7yBtNLn2YGAnjQ3XyLOFZUDY4vy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180219 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.180224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4e6008a2--36a5--590e--8013--ca4c2218d3f7-osd--block--4e6008a2--36a5--590e--8013--ca4c2218d3f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xtfbv2-27VD-p67v-3ENf-8igL-00RL-AN09d4', 'scsi-0QEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d', 'scsi-SQEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--16bf28f1--ae52--5ff4--8907--41e0bcdec1af-osd--block--16bf28f1--ae52--5ff4--8907--41e0bcdec1af'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DX02z4-RWLg-SM3n-sJQS-j5mJ-wBkD-ipzyi4', 'scsi-0QEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843', 'scsi-SQEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e', 'scsi-SQEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbd296ce--f103--5a39--9243--23354e346d82-osd--block--bbd296ce--f103--5a39--9243--23354e346d82', 'dm-uuid-LVM-yQM0Ic07SLIwjKKbXxWvwWfr3QKZYWMVXtf7o6hMkya84FcGeHH44VtZxIsn328L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5711b466--e770--5253--91be--c96275afda22-osd--block--5711b466--e770--5253--91be--c96275afda22', 'dm-uuid-LVM-MMe6XUI3c7bXIr2hZ1ceXdtZm1vNbondcwenie5XOI6Ph1DXfu59ts7jLTIYlfPa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-07 00:59:43.180264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180282 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.180286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bbd296ce--f103--5a39--9243--23354e346d82-osd--block--bbd296ce--f103--5a39--9243--23354e346d82'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0DrbFv-KJwt-cB5g-wqzQ-T0K1-euBu-O9L2Ra', 'scsi-0QEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7', 'scsi-SQEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5711b466--e770--5253--91be--c96275afda22-osd--block--5711b466--e770--5253--91be--c96275afda22'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6Ejbey-1Tkt-iJTC-9Pct-AbyH-T5VC-rNly8E', 'scsi-0QEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616', 'scsi-SQEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5', 'scsi-SQEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-07 00:59:43.180372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997', 'scsi-SQEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a8e12e9-2702-442e-8e8b-1bb37c249997-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180473 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.180477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e', 'scsi-SQEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd5a3f23-fcfd-47ca-822c-e3718156259e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180627 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180630 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.180633 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.180637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:59:43.180676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4', 'scsi-SQEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4-part1', 'scsi-SQEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4-part14', 'scsi-SQEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4-part15', 'scsi-SQEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2bd50aac-7288-4b67-9b89-7e8f2f739bb4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:59:43.180692 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.180695 | orchestrator | 2026-01-07 00:59:43.180698 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-07 00:59:43.180701 | orchestrator | Wednesday 07 January 2026 00:49:11 +0000 (0:00:01.838) 0:00:35.682 ***** 2026-01-07 00:59:43.180708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc-osd--block--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc', 'dm-uuid-LVM-8bK9ULb58KIMrsCGmdMXR1IVFLBmguBSIutgTi2cmowlm638qyWdp3yczOl3SY0m'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180712 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e6008a2--36a5--590e--8013--ca4c2218d3f7-osd--block--4e6008a2--36a5--590e--8013--ca4c2218d3f7', 'dm-uuid-LVM-DZLvgoHJB2dzrj4NMm2HmBFaLg5fGwVRHPF1iBjynLE7kXuSlbDawfn32gGQsT1u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16bf28f1--ae52--5ff4--8907--41e0bcdec1af-osd--block--16bf28f1--ae52--5ff4--8907--41e0bcdec1af', 'dm-uuid-LVM-L4I3js6ulS27pfsMVBMrKX9few3BpmSOtHpsW7yBtNLn2YGAnjQ3XyLOFZUDY4vy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:59:43.180720 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35426297--011a--51b2--a2d6--4f3d2a544c0e-osd--block--35426297--011a--51b2--a2d6--4f3d2a544c0e', 'dm-uuid-LVM-XAwDBKXsEIC3fWQHPh980GebvskQX2lbzqEPkgZKUqKZnnP9ltkb2SFHiz002pst'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180744 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180747 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180750 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180755 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 
00:59:43.180772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180775 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180778 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180782 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180785 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180818 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180833 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180853 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180857 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180871 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180879 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180890 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) 
| bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.180998 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.181004 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.181007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': 
[], 'host': '', 'links': {'ids': ['dm-name-ceph--bbd296ce--f103--5a39--9243--23354e346d82-osd--block--bbd296ce--f103--5a39--9243--23354e346d82', 'dm-uuid-LVM-yQM0Ic07SLIwjKKbXxWvwWfr3QKZYWMVXtf7o6hMkya84FcGeHH44VtZxIsn328L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.181010 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.181016 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.181019 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.181034 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.181039 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.181045 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:59:43.181061 | orchestrator | skipping: [testbed-node-4] => (item=sdb: 20.00 GB QEMU HARDDISK, ceph OSD LV holder; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181070 | orchestrator | skipping: [testbed-node-2] => (item=loop7: empty loop device; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181076 | orchestrator | skipping: [testbed-node-5] => (item=dm-1: 20.00 GB ceph OSD LVM volume; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181081 | orchestrator | skipping: [testbed-node-4] => (item=sdc: 20.00 GB QEMU HARDDISK, ceph OSD LV holder; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181089 | orchestrator | skipping: [testbed-node-0] => (item=loop6: empty loop device; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181094 | orchestrator | skipping: [testbed-node-4] => (item=sdd: 20.00 GB QEMU HARDDISK, unused; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181115 | orchestrator | skipping: [testbed-node-1] => (item=loop3: empty loop device; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181121 | orchestrator | skipping: [testbed-node-3] => (item=loop4: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181129 | orchestrator | skipping: [testbed-node-2] => (item=sda: 80.00 GB QEMU HARDDISK, root disk with cloudimg-rootfs/UEFI/BOOT partitions; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181150 | orchestrator | skipping: [testbed-node-1] => (item=loop4: empty loop device; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181255 | orchestrator | skipping: [testbed-node-4] => (item=sr0: QEMU DVD-ROM, config-2; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181259 | orchestrator | skipping: [testbed-node-1] => (item=loop5: empty loop device; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181262 | orchestrator | skipping: [testbed-node-5] => (item=loop0: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181265 | orchestrator | skipping: [testbed-node-0] => (item=loop7: empty loop device; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181271 | orchestrator | skipping: [testbed-node-3] => (item=loop5: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181278 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.181281 | orchestrator | skipping: [testbed-node-1] => (item=loop6: empty loop device; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181294 | orchestrator | skipping: [testbed-node-2] => (item=sr0: QEMU DVD-ROM, config-2; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181298 | orchestrator | skipping: [testbed-node-1] => (item=loop7: empty loop device; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181303 | orchestrator | skipping: [testbed-node-1] => (item=sda: 80.00 GB QEMU HARDDISK, root disk with cloudimg-rootfs/UEFI/BOOT partitions; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181311 | orchestrator | skipping: [testbed-node-3] => (item=loop6: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181336 | orchestrator | skipping: [testbed-node-1] => (item=sr0: QEMU DVD-ROM, config-2; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181400 | orchestrator | skipping: [testbed-node-0] => (item=sda: 80.00 GB QEMU HARDDISK, root disk with cloudimg-rootfs/UEFI/BOOT partitions; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181411 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.181416 | orchestrator | skipping: [testbed-node-5] => (item=loop1: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181419 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.181434 | orchestrator | skipping: [testbed-node-0] => (item=sr0: QEMU DVD-ROM, config-2; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-01-07 00:59:43.181438 | orchestrator | skipping: [testbed-node-3] => (item=loop7: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181441 | orchestrator | skipping: [testbed-node-5] => (item=loop2: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181444 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.181450 | orchestrator | skipping: [testbed-node-3] => (item=sda: 80.00 GB QEMU HARDDISK, root disk with cloudimg-rootfs/UEFI/BOOT partitions; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181464 | orchestrator | skipping: [testbed-node-3] => (item=sdb: 20.00 GB QEMU HARDDISK, ceph OSD LV holder; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181468 | orchestrator | skipping: [testbed-node-5] => (item=loop3: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181471 | orchestrator | skipping: [testbed-node-5] => (item=loop4: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181474 | orchestrator | skipping: [testbed-node-3] => (item=sdc: 20.00 GB QEMU HARDDISK, ceph OSD LV holder; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181482 | orchestrator | skipping: [testbed-node-5] => (item=loop5: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181493 | orchestrator | skipping: [testbed-node-3] => (item=sdd: 20.00 GB QEMU HARDDISK, unused; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181497 | orchestrator | skipping: [testbed-node-5] => (item=loop6: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181500 | orchestrator | skipping: [testbed-node-3] => (item=sr0: QEMU DVD-ROM, config-2; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181503 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.181507 | orchestrator | skipping: [testbed-node-5] => (item=loop7: empty loop device; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181522 | orchestrator | skipping: [testbed-node-5] => (item=sda: 80.00 GB QEMU HARDDISK, root disk with cloudimg-rootfs/UEFI/BOOT partitions; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181526 | orchestrator | skipping: [testbed-node-5] => (item=sdb: 20.00 GB QEMU HARDDISK, ceph OSD LV holder; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181530 | orchestrator | skipping: [testbed-node-5] => (item=sdc: 20.00 GB QEMU HARDDISK, ceph OSD LV holder; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181537 | orchestrator | skipping: [testbed-node-5] => (item=sdd: 20.00 GB QEMU HARDDISK, unused; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-01-07 00:59:43.181540 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:59:43.181543 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.181547 | orchestrator | 2026-01-07 00:59:43.181557 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-07 00:59:43.181560 | orchestrator | Wednesday 07 January 2026 00:49:13 +0000 (0:00:01.470) 0:00:37.153 ***** 2026-01-07 00:59:43.181564 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.181567 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.181570 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.181573 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.181576 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.181580 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.181583 | orchestrator | 2026-01-07 00:59:43.181586 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-07 00:59:43.181589 | orchestrator | Wednesday 07 January 2026 00:49:14 +0000 (0:00:01.474) 0:00:38.627 ***** 2026-01-07 00:59:43.181592 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.181595 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.181598 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.181601 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.181604 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.181607 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.181611 | orchestrator | 2026-01-07 00:59:43.181614 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-07 00:59:43.181617 | orchestrator | Wednesday 07 January 2026 00:49:15 +0000 (0:00:00.692) 0:00:39.319 ***** 2026-01-07 00:59:43.181620 | orchestrator | skipping: [testbed-node-3] 2026-01-07 
00:59:43.181623 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.181626 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.181629 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.181632 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.181635 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.181639 | orchestrator | 2026-01-07 00:59:43.181642 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-07 00:59:43.181645 | orchestrator | Wednesday 07 January 2026 00:49:16 +0000 (0:00:01.011) 0:00:40.331 ***** 2026-01-07 00:59:43.181651 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.181655 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.181658 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.181661 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.181664 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.181667 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.181670 | orchestrator | 2026-01-07 00:59:43.181673 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-07 00:59:43.181676 | orchestrator | Wednesday 07 January 2026 00:49:17 +0000 (0:00:01.020) 0:00:41.351 ***** 2026-01-07 00:59:43.181679 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.181683 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.181686 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.181689 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.181692 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.181695 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.181698 | orchestrator | 2026-01-07 00:59:43.181701 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-07 00:59:43.181704 | orchestrator | Wednesday 07 
January 2026 00:49:19 +0000 (0:00:02.282) 0:00:43.634 ***** 2026-01-07 00:59:43.181707 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.181710 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.181714 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.181717 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.181720 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.181723 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.181726 | orchestrator | 2026-01-07 00:59:43.181729 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-07 00:59:43.181732 | orchestrator | Wednesday 07 January 2026 00:49:20 +0000 (0:00:01.117) 0:00:44.751 ***** 2026-01-07 00:59:43.181735 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-07 00:59:43.181774 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-07 00:59:43.181778 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-07 00:59:43.181781 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-07 00:59:43.181784 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-07 00:59:43.181787 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-07 00:59:43.181791 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-07 00:59:43.181794 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-07 00:59:43.181797 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-07 00:59:43.181800 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-07 00:59:43.181803 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-01-07 00:59:43.181806 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-07 00:59:43.181809 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-07 00:59:43.181812 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-0) 2026-01-07 00:59:43.181818 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-07 00:59:43.181821 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-01-07 00:59:43.181824 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-01-07 00:59:43.181827 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-07 00:59:43.181830 | orchestrator | 2026-01-07 00:59:43.181834 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-07 00:59:43.181837 | orchestrator | Wednesday 07 January 2026 00:49:25 +0000 (0:00:04.789) 0:00:49.541 ***** 2026-01-07 00:59:43.181840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 00:59:43.181843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-07 00:59:43.181846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 00:59:43.181849 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.181855 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-07 00:59:43.181858 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-07 00:59:43.181861 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-07 00:59:43.181864 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.181868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-07 00:59:43.181879 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-07 00:59:43.181883 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-07 00:59:43.181886 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.181889 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:59:43.181892 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:59:43.181896 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:59:43.181899 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.182059 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-07 00:59:43.182066 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-07 00:59:43.182070 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-07 00:59:43.182074 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.182077 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-07 00:59:43.182081 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-07 00:59:43.182085 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-07 00:59:43.182088 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.182092 | orchestrator | 2026-01-07 00:59:43.182096 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-07 00:59:43.182099 | orchestrator | Wednesday 07 January 2026 00:49:27 +0000 (0:00:01.490) 0:00:51.031 ***** 2026-01-07 00:59:43.182103 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.182107 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.182110 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.182114 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:59:43.182118 | orchestrator | 2026-01-07 00:59:43.182122 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-07 00:59:43.182126 | orchestrator | Wednesday 07 January 2026 00:49:29 +0000 (0:00:02.246) 0:00:53.278 ***** 2026-01-07 00:59:43.182129 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182133 | orchestrator | skipping: 
[testbed-node-4] 2026-01-07 00:59:43.182137 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.182140 | orchestrator | 2026-01-07 00:59:43.182145 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-07 00:59:43.182151 | orchestrator | Wednesday 07 January 2026 00:49:30 +0000 (0:00:00.523) 0:00:53.802 ***** 2026-01-07 00:59:43.182156 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182161 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.182166 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.182171 | orchestrator | 2026-01-07 00:59:43.182177 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-07 00:59:43.182182 | orchestrator | Wednesday 07 January 2026 00:49:30 +0000 (0:00:00.589) 0:00:54.391 ***** 2026-01-07 00:59:43.182187 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182192 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.182197 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.182202 | orchestrator | 2026-01-07 00:59:43.182206 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-07 00:59:43.182211 | orchestrator | Wednesday 07 January 2026 00:49:31 +0000 (0:00:00.966) 0:00:55.358 ***** 2026-01-07 00:59:43.182216 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.182222 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.182233 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.182237 | orchestrator | 2026-01-07 00:59:43.182243 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-07 00:59:43.182248 | orchestrator | Wednesday 07 January 2026 00:49:32 +0000 (0:00:00.657) 0:00:56.016 ***** 2026-01-07 00:59:43.182254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.182259 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.182264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.182269 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182274 | orchestrator | 2026-01-07 00:59:43.182279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-07 00:59:43.182431 | orchestrator | Wednesday 07 January 2026 00:49:32 +0000 (0:00:00.394) 0:00:56.410 ***** 2026-01-07 00:59:43.182440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.182445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.182450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.182455 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182460 | orchestrator | 2026-01-07 00:59:43.182473 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-07 00:59:43.182480 | orchestrator | Wednesday 07 January 2026 00:49:33 +0000 (0:00:00.536) 0:00:56.947 ***** 2026-01-07 00:59:43.182486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.182490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.182495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.182500 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182505 | orchestrator | 2026-01-07 00:59:43.182509 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-07 00:59:43.182514 | orchestrator | Wednesday 07 January 2026 00:49:33 +0000 (0:00:00.461) 0:00:57.408 ***** 2026-01-07 00:59:43.182518 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.182523 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.182529 | orchestrator | ok: [testbed-node-5] 
2026-01-07 00:59:43.182533 | orchestrator | 2026-01-07 00:59:43.182538 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-07 00:59:43.182544 | orchestrator | Wednesday 07 January 2026 00:49:34 +0000 (0:00:00.810) 0:00:58.218 ***** 2026-01-07 00:59:43.182549 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-07 00:59:43.182554 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-07 00:59:43.182580 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-07 00:59:43.182584 | orchestrator | 2026-01-07 00:59:43.182588 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-07 00:59:43.182591 | orchestrator | Wednesday 07 January 2026 00:49:36 +0000 (0:00:01.807) 0:01:00.026 ***** 2026-01-07 00:59:43.182594 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:59:43.182598 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:59:43.182601 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:59:43.182604 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-07 00:59:43.182607 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-07 00:59:43.182610 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-07 00:59:43.182613 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-07 00:59:43.182616 | orchestrator | 2026-01-07 00:59:43.182620 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-07 00:59:43.182623 | orchestrator | Wednesday 07 January 2026 00:49:37 +0000 (0:00:00.785) 0:01:00.811 ***** 2026-01-07 00:59:43.182626 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:59:43.182635 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:59:43.182638 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:59:43.182641 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-07 00:59:43.182644 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-07 00:59:43.182647 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-07 00:59:43.182651 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-07 00:59:43.182654 | orchestrator | 2026-01-07 00:59:43.182657 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 00:59:43.182660 | orchestrator | Wednesday 07 January 2026 00:49:38 +0000 (0:00:01.934) 0:01:02.746 ***** 2026-01-07 00:59:43.182688 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.182693 | orchestrator | 2026-01-07 00:59:43.182696 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-07 00:59:43.182699 | orchestrator | Wednesday 07 January 2026 00:49:40 +0000 (0:00:01.270) 0:01:04.016 ***** 2026-01-07 00:59:43.182703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.182706 | orchestrator | 2026-01-07 00:59:43.182709 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:59:43.182712 | orchestrator | Wednesday 07 January 
2026 00:49:41 +0000 (0:00:01.216) 0:01:05.233 ***** 2026-01-07 00:59:43.182715 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182718 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.182721 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.182725 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.182728 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.182731 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.182734 | orchestrator | 2026-01-07 00:59:43.182737 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-07 00:59:43.182740 | orchestrator | Wednesday 07 January 2026 00:49:43 +0000 (0:00:01.541) 0:01:06.774 ***** 2026-01-07 00:59:43.182743 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.182747 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.182750 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.182753 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.182756 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.182759 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.182762 | orchestrator | 2026-01-07 00:59:43.182765 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:59:43.182768 | orchestrator | Wednesday 07 January 2026 00:49:44 +0000 (0:00:01.416) 0:01:08.190 ***** 2026-01-07 00:59:43.182772 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.182777 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.182781 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.182784 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.182787 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.182790 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.182793 | orchestrator | 2026-01-07 00:59:43.182796 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-01-07 00:59:43.182800 | orchestrator | Wednesday 07 January 2026 00:49:45 +0000 (0:00:01.433) 0:01:09.624 ***** 2026-01-07 00:59:43.182803 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.182806 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.182809 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.182812 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.182818 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.182821 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.182824 | orchestrator | 2026-01-07 00:59:43.182827 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:59:43.182830 | orchestrator | Wednesday 07 January 2026 00:49:47 +0000 (0:00:01.352) 0:01:10.976 ***** 2026-01-07 00:59:43.182833 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182836 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.182839 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.182842 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.182846 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.182861 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.182864 | orchestrator | 2026-01-07 00:59:43.182868 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-07 00:59:43.182871 | orchestrator | Wednesday 07 January 2026 00:49:50 +0000 (0:00:02.803) 0:01:13.780 ***** 2026-01-07 00:59:43.182874 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182877 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.182880 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.182883 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.182886 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.182889 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.182893 | 
orchestrator | 2026-01-07 00:59:43.182896 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:59:43.182899 | orchestrator | Wednesday 07 January 2026 00:49:51 +0000 (0:00:01.051) 0:01:14.832 ***** 2026-01-07 00:59:43.182902 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182905 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.182908 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.182911 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.182914 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.182917 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.182920 | orchestrator | 2026-01-07 00:59:43.182924 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:59:43.182927 | orchestrator | Wednesday 07 January 2026 00:49:52 +0000 (0:00:00.958) 0:01:15.790 ***** 2026-01-07 00:59:43.182930 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.182933 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.182936 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.182939 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.182942 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.182945 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.182948 | orchestrator | 2026-01-07 00:59:43.182951 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:59:43.182955 | orchestrator | Wednesday 07 January 2026 00:49:53 +0000 (0:00:01.666) 0:01:17.457 ***** 2026-01-07 00:59:43.182958 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.182961 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.182964 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.182967 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.182970 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.182973 | 
orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.182976 | orchestrator | 2026-01-07 00:59:43.182979 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:59:43.182982 | orchestrator | Wednesday 07 January 2026 00:49:55 +0000 (0:00:01.672) 0:01:19.130 ***** 2026-01-07 00:59:43.182985 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.182988 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.182992 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.182995 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.182998 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.183001 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.183004 | orchestrator | 2026-01-07 00:59:43.183007 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:59:43.183012 | orchestrator | Wednesday 07 January 2026 00:49:56 +0000 (0:00:00.715) 0:01:19.845 ***** 2026-01-07 00:59:43.183016 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.183019 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.183022 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.183025 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.183028 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.183031 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.183034 | orchestrator | 2026-01-07 00:59:43.183037 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:59:43.183040 | orchestrator | Wednesday 07 January 2026 00:49:56 +0000 (0:00:00.723) 0:01:20.568 ***** 2026-01-07 00:59:43.183044 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.183047 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.183050 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.183053 | orchestrator | skipping: [testbed-node-0] 
2026-01-07 00:59:43.183056 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.183060 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.183063 | orchestrator | 2026-01-07 00:59:43.183067 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:59:43.183071 | orchestrator | Wednesday 07 January 2026 00:49:57 +0000 (0:00:00.601) 0:01:21.169 ***** 2026-01-07 00:59:43.183075 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.183078 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.183082 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.183085 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.183089 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.183093 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.183097 | orchestrator | 2026-01-07 00:59:43.183100 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:59:43.183104 | orchestrator | Wednesday 07 January 2026 00:49:58 +0000 (0:00:00.651) 0:01:21.821 ***** 2026-01-07 00:59:43.183109 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.183113 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.183116 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.183120 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.183124 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.183127 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.183131 | orchestrator | 2026-01-07 00:59:43.183135 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:59:43.183138 | orchestrator | Wednesday 07 January 2026 00:49:58 +0000 (0:00:00.627) 0:01:22.448 ***** 2026-01-07 00:59:43.183142 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.183146 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.183149 | orchestrator 
| skipping: [testbed-node-5] 2026-01-07 00:59:43.183153 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.183157 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.183160 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.183164 | orchestrator | 2026-01-07 00:59:43.183168 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:59:43.183171 | orchestrator | Wednesday 07 January 2026 00:49:59 +0000 (0:00:00.933) 0:01:23.382 ***** 2026-01-07 00:59:43.183175 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.183179 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.183182 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.183186 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.183200 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.183204 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.183208 | orchestrator | 2026-01-07 00:59:43.183212 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:59:43.183215 | orchestrator | Wednesday 07 January 2026 00:50:00 +0000 (0:00:00.776) 0:01:24.159 ***** 2026-01-07 00:59:43.183219 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.183223 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.183229 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.183233 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.183236 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.183240 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.183244 | orchestrator | 2026-01-07 00:59:43.183247 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:59:43.183251 | orchestrator | Wednesday 07 January 2026 00:50:01 +0000 (0:00:01.040) 0:01:25.199 ***** 2026-01-07 00:59:43.183254 | orchestrator | ok: [testbed-node-3] 
2026-01-07 00:59:43.183258 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.183262 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.183265 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.183269 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.183273 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.183276 | orchestrator |
2026-01-07 00:59:43.183280 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-07 00:59:43.183284 | orchestrator | Wednesday 07 January 2026 00:50:02 +0000 (0:00:00.895) 0:01:26.095 *****
2026-01-07 00:59:43.183287 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.183291 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.183294 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.183298 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.183301 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.183305 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.183309 | orchestrator |
2026-01-07 00:59:43.183312 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-07 00:59:43.183316 | orchestrator | Wednesday 07 January 2026 00:50:03 +0000 (0:00:01.576) 0:01:27.672 *****
2026-01-07 00:59:43.183320 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:59:43.183323 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:59:43.183327 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:59:43.183331 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.183334 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.183338 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.183363 | orchestrator |
2026-01-07 00:59:43.183367 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-07 00:59:43.183371 | orchestrator | Wednesday 07 January 2026 00:50:06 +0000 (0:00:02.168) 0:01:29.840 *****
2026-01-07 00:59:43.183375 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:59:43.183378 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.183382 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.183385 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:59:43.183389 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.183393 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:59:43.183396 | orchestrator |
2026-01-07 00:59:43.183400 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-07 00:59:43.183404 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:02.630) 0:01:32.470 *****
2026-01-07 00:59:43.183407 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.183411 | orchestrator |
2026-01-07 00:59:43.183415 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-07 00:59:43.183419 | orchestrator | Wednesday 07 January 2026 00:50:09 +0000 (0:00:01.267) 0:01:33.738 *****
2026-01-07 00:59:43.183422 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183426 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183430 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183433 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183437 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183441 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183444 | orchestrator |
2026-01-07 00:59:43.183448 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-07 00:59:43.183452 | orchestrator | Wednesday 07 January 2026 00:50:10 +0000 (0:00:00.734) 0:01:34.472 *****
2026-01-07 00:59:43.183458 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183461 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183464 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183467 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183470 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183473 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183476 | orchestrator |
2026-01-07 00:59:43.183479 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-07 00:59:43.183484 | orchestrator | Wednesday 07 January 2026 00:50:11 +0000 (0:00:00.909) 0:01:35.382 *****
2026-01-07 00:59:43.183487 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:59:43.183490 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:59:43.183494 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:59:43.183497 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:59:43.183500 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:59:43.183503 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:59:43.183506 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:59:43.183509 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:59:43.183512 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:59:43.183515 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:59:43.183529 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:59:43.183532 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:59:43.183535 | orchestrator |
2026-01-07 00:59:43.183539 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-07 00:59:43.183542 | orchestrator | Wednesday 07 January 2026 00:50:13 +0000 (0:00:01.516) 0:01:36.898 *****
2026-01-07 00:59:43.183545 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:59:43.183548 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:59:43.183551 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:59:43.183554 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.183557 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.183560 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.183564 | orchestrator |
2026-01-07 00:59:43.183567 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-07 00:59:43.183570 | orchestrator | Wednesday 07 January 2026 00:50:14 +0000 (0:00:01.309) 0:01:38.207 *****
2026-01-07 00:59:43.183573 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183576 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183579 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183582 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183585 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183588 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183591 | orchestrator |
2026-01-07 00:59:43.183594 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-07 00:59:43.183597 | orchestrator | Wednesday 07 January 2026 00:50:15 +0000 (0:00:00.620) 0:01:38.828 *****
2026-01-07 00:59:43.183601 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183604 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183607 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183610 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183613 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183616 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183619 | orchestrator |
2026-01-07 00:59:43.183622 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-07 00:59:43.183627 | orchestrator | Wednesday 07 January 2026 00:50:15 +0000 (0:00:00.906) 0:01:39.734 *****
2026-01-07 00:59:43.183631 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183634 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183637 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183640 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183643 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183646 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183649 | orchestrator |
2026-01-07 00:59:43.183652 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-07 00:59:43.183655 | orchestrator | Wednesday 07 January 2026 00:50:16 +0000 (0:00:00.713) 0:01:40.448 *****
2026-01-07 00:59:43.183659 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.183662 | orchestrator |
2026-01-07 00:59:43.183665 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-07 00:59:43.183668 | orchestrator | Wednesday 07 January 2026 00:50:18 +0000 (0:00:01.684) 0:01:42.132 *****
2026-01-07 00:59:43.183671 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.183674 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.183677 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.183680 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.183683 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.183687 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.183690 | orchestrator |
2026-01-07 00:59:43.183693 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-07 00:59:43.183696 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:52.639) 0:02:34.771 *****
2026-01-07 00:59:43.183699 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-07 00:59:43.183702 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-07 00:59:43.183705 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-07 00:59:43.183708 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183711 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-07 00:59:43.183714 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-07 00:59:43.183718 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-07 00:59:43.183721 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183725 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-07 00:59:43.183729 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-07 00:59:43.183732 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-07 00:59:43.183735 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183738 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-07 00:59:43.183741 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-07 00:59:43.183744 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-07 00:59:43.183747 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183750 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-07 00:59:43.183753 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-07 00:59:43.183757 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-07 00:59:43.183760 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183771 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-07 00:59:43.183775 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-07 00:59:43.183781 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-07 00:59:43.183784 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183787 | orchestrator |
2026-01-07 00:59:43.183790 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-07 00:59:43.183793 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:00.703) 0:02:35.475 *****
2026-01-07 00:59:43.183796 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183799 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183802 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183806 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183809 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183812 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183815 | orchestrator |
2026-01-07 00:59:43.183818 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-07 00:59:43.183821 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:00.877) 0:02:36.353 *****
2026-01-07 00:59:43.183824 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183827 | orchestrator |
2026-01-07 00:59:43.183830 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-07 00:59:43.183833 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:00.156) 0:02:36.509 *****
2026-01-07 00:59:43.183837 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183840 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183843 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183846 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183849 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183852 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183855 | orchestrator |
2026-01-07 00:59:43.183858 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-07 00:59:43.183861 | orchestrator | Wednesday 07 January 2026 00:51:13 +0000 (0:00:00.745) 0:02:37.254 *****
2026-01-07 00:59:43.183864 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183867 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183870 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183873 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183877 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183880 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183883 | orchestrator |
2026-01-07 00:59:43.183886 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-07 00:59:43.183889 | orchestrator | Wednesday 07 January 2026 00:51:14 +0000 (0:00:01.239) 0:02:38.493 *****
2026-01-07 00:59:43.183892 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183895 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.183898 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.183901 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.183905 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.183908 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.183911 | orchestrator |
2026-01-07 00:59:43.183914 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-07 00:59:43.183917 | orchestrator | Wednesday 07 January 2026 00:51:15 +0000 (0:00:00.888) 0:02:39.382 *****
2026-01-07 00:59:43.183920 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.183923 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.183926 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.183929 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.183932 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.183935 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.183939 | orchestrator |
2026-01-07 00:59:43.183942 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-07 00:59:43.183945 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:03.192) 0:02:42.575 *****
2026-01-07 00:59:43.183948 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.183955 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.183958 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.183961 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.183964 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.183967 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.183970 | orchestrator |
2026-01-07 00:59:43.183973 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-07 00:59:43.183976 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 (0:00:00.760) 0:02:43.335 *****
2026-01-07 00:59:43.183980 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.183983 | orchestrator |
2026-01-07 00:59:43.183986 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-07 00:59:43.183991 | orchestrator | Wednesday 07 January 2026 00:51:21 +0000 (0:00:01.483) 0:02:44.818 *****
2026-01-07 00:59:43.183995 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.183998 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.184001 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.184004 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184007 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184010 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184013 | orchestrator |
2026-01-07 00:59:43.184016 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-07 00:59:43.184019 | orchestrator | Wednesday 07 January 2026 00:51:22 +0000 (0:00:01.391) 0:02:46.210 *****
2026-01-07 00:59:43.184023 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.184026 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.184029 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.184032 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184035 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184038 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184041 | orchestrator |
2026-01-07 00:59:43.184044 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-07 00:59:43.184047 | orchestrator | Wednesday 07 January 2026 00:51:23 +0000 (0:00:00.891) 0:02:47.102 *****
2026-01-07 00:59:43.184050 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.184053 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.184065 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.184069 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184072 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184075 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184078 | orchestrator |
2026-01-07 00:59:43.184081 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-07 00:59:43.184084 | orchestrator | Wednesday 07 January 2026 00:51:24 +0000 (0:00:01.065) 0:02:48.168 *****
2026-01-07 00:59:43.184087 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.184091 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.184094 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.184097 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184100 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184103 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184106 | orchestrator |
2026-01-07 00:59:43.184109 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-07 00:59:43.184112 | orchestrator | Wednesday 07 January 2026 00:51:25 +0000 (0:00:00.935) 0:02:49.104 *****
2026-01-07 00:59:43.184115 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.184118 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.184121 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.184124 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184128 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184131 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184134 | orchestrator |
2026-01-07 00:59:43.184137 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-07 00:59:43.184142 | orchestrator | Wednesday 07 January 2026 00:51:26 +0000 (0:00:01.435) 0:02:50.539 *****
2026-01-07 00:59:43.184145 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.184148 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.184153 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.184158 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184164 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184169 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184174 | orchestrator |
2026-01-07 00:59:43.184182 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-07 00:59:43.184188 | orchestrator | Wednesday 07 January 2026 00:51:27 +0000 (0:00:00.742) 0:02:51.282 *****
2026-01-07 00:59:43.184194 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.184198 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.184203 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.184208 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184213 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184218 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184223 | orchestrator |
2026-01-07 00:59:43.184228 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-07 00:59:43.184233 | orchestrator | Wednesday 07 January 2026 00:51:28 +0000 (0:00:01.156) 0:02:52.438 *****
2026-01-07 00:59:43.184238 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.184244 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.184249 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184254 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.184259 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184264 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184269 | orchestrator |
2026-01-07 00:59:43.184273 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-07 00:59:43.184278 | orchestrator | Wednesday 07 January 2026 00:51:29 +0000 (0:00:00.852) 0:02:53.291 *****
2026-01-07 00:59:43.184284 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.184289 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.184294 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.184299 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.184302 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.184305 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.184308 | orchestrator |
2026-01-07 00:59:43.184311 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-07 00:59:43.184314 | orchestrator | Wednesday 07 January 2026 00:51:31 +0000 (0:00:01.862) 0:02:55.153 *****
2026-01-07 00:59:43.184318 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.184321 | orchestrator |
2026-01-07 00:59:43.184324 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-07 00:59:43.184327 | orchestrator | Wednesday 07 January 2026 00:51:32 +0000 (0:00:01.370) 0:02:56.524 *****
2026-01-07 00:59:43.184330 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-07 00:59:43.184333 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-07 00:59:43.184337 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-07 00:59:43.184340 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-07 00:59:43.184358 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-07 00:59:43.184361 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-07 00:59:43.184366 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-07 00:59:43.184372 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-07 00:59:43.184379 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-07 00:59:43.184384 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-07 00:59:43.184393 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-07 00:59:43.184399 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-07 00:59:43.184404 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-07 00:59:43.184408 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-07 00:59:43.184412 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-07 00:59:43.184417 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-07 00:59:43.184422 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-07 00:59:43.184426 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-07 00:59:43.184455 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-07 00:59:43.184461 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-07 00:59:43.184466 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-07 00:59:43.184471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-07 00:59:43.184476 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-07 00:59:43.184481 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-07 00:59:43.184486 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-07 00:59:43.184491 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-07 00:59:43.184497 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-07 00:59:43.184500 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-07 00:59:43.184503 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-07 00:59:43.184506 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-07 00:59:43.184509 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-07 00:59:43.184513 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-07 00:59:43.184516 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-07 00:59:43.184519 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-07 00:59:43.184522 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-07 00:59:43.184525 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-07 00:59:43.184528 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-07 00:59:43.184531 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-07 00:59:43.184534 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-07 00:59:43.184537 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-07 00:59:43.184540 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-07 00:59:43.184543 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-07 00:59:43.184547 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-07 00:59:43.184550 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-07 00:59:43.184553 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-07 00:59:43.184556 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-07 00:59:43.184559 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-07 00:59:43.184562 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-07 00:59:43.184565 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-07 00:59:43.184568 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-07 00:59:43.184573 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-07 00:59:43.184578 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-07 00:59:43.184585 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-07 00:59:43.184597 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-07 00:59:43.184602 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-07 00:59:43.184607 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-07 00:59:43.184612 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-07 00:59:43.184616 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-07 00:59:43.184621 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-07 00:59:43.184626 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-07 00:59:43.184631 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-07 00:59:43.184636 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-07 00:59:43.184641 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-07 00:59:43.184646 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-07 00:59:43.184654 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-07 00:59:43.184657 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-07 00:59:43.184660 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-07 00:59:43.184664 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-07 00:59:43.184667 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-07 00:59:43.184670 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-07 00:59:43.184673 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-07 00:59:43.184676 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-07 00:59:43.184679 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-07 00:59:43.184682 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-07 00:59:43.184685 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-07 00:59:43.184688 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-07 00:59:43.184706 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-07 00:59:43.184710 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-07 00:59:43.184713 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-07 00:59:43.184716 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-07 00:59:43.184719 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-07 00:59:43.184722 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-07 00:59:43.184725 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-07 00:59:43.184728 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-07 00:59:43.184732 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-07 00:59:43.184735 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-07 00:59:43.184738 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-07 00:59:43.184741 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-07 00:59:43.184744 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-07 00:59:43.184747 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-07 00:59:43.184750 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-07 00:59:43.184753 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-07 00:59:43.184756 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-07 00:59:43.184762 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-07 00:59:43.184765 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-07 00:59:43.184768 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-07 00:59:43.184772 | orchestrator |
2026-01-07 00:59:43.184775 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-07 00:59:43.184778 | orchestrator | Wednesday 07 January 2026 00:51:39 +0000 (0:00:07.117) 0:03:03.641 *****
2026-01-07 00:59:43.184781 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184784 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184787 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184791 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:59:43.184794 | orchestrator |
2026-01-07 00:59:43.184797 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-07 00:59:43.184801 | orchestrator | Wednesday 07 January 2026 00:51:41 +0000 (0:00:01.199) 0:03:04.840 *****
2026-01-07 00:59:43.184804 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-07 00:59:43.184807 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-07 00:59:43.184810 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-07 00:59:43.184813 | orchestrator |
2026-01-07 00:59:43.184817 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-07 00:59:43.184820 | orchestrator | Wednesday 07 January 2026 00:51:42 +0000 (0:00:01.068) 0:03:05.909 *****
2026-01-07 00:59:43.184823 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-07 00:59:43.184826 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-07 00:59:43.184829 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-07 00:59:43.184832 | orchestrator |
2026-01-07 00:59:43.184835 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-07 00:59:43.184839 | orchestrator | Wednesday 07 January 2026 00:51:43 +0000 (0:00:01.453) 0:03:07.362 *****
2026-01-07 00:59:43.184842 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.184845 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.184848 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.184851 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.184856 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.184859 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.184862 | orchestrator |
2026-01-07 00:59:43.184865 | orchestrator
| TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-07 00:59:43.184869 | orchestrator | Wednesday 07 January 2026 00:51:44 +0000 (0:00:01.253) 0:03:08.616 ***** 2026-01-07 00:59:43.184872 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.184875 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.184878 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.184881 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.184884 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.184887 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.184890 | orchestrator | 2026-01-07 00:59:43.184893 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-07 00:59:43.184897 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:01.127) 0:03:09.743 ***** 2026-01-07 00:59:43.184900 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.184903 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.184906 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.184912 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.184915 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.184918 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.184921 | orchestrator | 2026-01-07 00:59:43.184934 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-07 00:59:43.184937 | orchestrator | Wednesday 07 January 2026 00:51:46 +0000 (0:00:00.804) 0:03:10.548 ***** 2026-01-07 00:59:43.184940 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.184944 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.184947 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.184950 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.184953 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.184956 | 
orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.184959 | orchestrator | 2026-01-07 00:59:43.184962 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-07 00:59:43.184965 | orchestrator | Wednesday 07 January 2026 00:51:47 +0000 (0:00:01.181) 0:03:11.730 ***** 2026-01-07 00:59:43.184968 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.184971 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.184974 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.184977 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.184981 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.184984 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.184987 | orchestrator | 2026-01-07 00:59:43.184990 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-07 00:59:43.184993 | orchestrator | Wednesday 07 January 2026 00:51:48 +0000 (0:00:00.699) 0:03:12.429 ***** 2026-01-07 00:59:43.184996 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.184999 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185002 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185005 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185008 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185011 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185014 | orchestrator | 2026-01-07 00:59:43.185018 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-07 00:59:43.185021 | orchestrator | Wednesday 07 January 2026 00:51:49 +0000 (0:00:00.725) 0:03:13.155 ***** 2026-01-07 00:59:43.185024 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185027 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185030 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 00:59:43.185033 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185036 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185039 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185042 | orchestrator | 2026-01-07 00:59:43.185045 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-07 00:59:43.185049 | orchestrator | Wednesday 07 January 2026 00:51:49 +0000 (0:00:00.598) 0:03:13.754 ***** 2026-01-07 00:59:43.185052 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185055 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185058 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185061 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185064 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185067 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185070 | orchestrator | 2026-01-07 00:59:43.185073 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-07 00:59:43.185076 | orchestrator | Wednesday 07 January 2026 00:51:50 +0000 (0:00:00.667) 0:03:14.421 ***** 2026-01-07 00:59:43.185079 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185083 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185086 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185089 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.185095 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.185098 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.185101 | orchestrator | 2026-01-07 00:59:43.185104 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-07 00:59:43.185107 | orchestrator | Wednesday 07 January 2026 00:51:53 +0000 (0:00:02.664) 0:03:17.086 ***** 2026-01-07 00:59:43.185110 | 
orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.185113 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.185116 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.185119 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185123 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185126 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185129 | orchestrator | 2026-01-07 00:59:43.185132 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-07 00:59:43.185135 | orchestrator | Wednesday 07 January 2026 00:51:54 +0000 (0:00:00.759) 0:03:17.845 ***** 2026-01-07 00:59:43.185138 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.185141 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.185144 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.185147 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185150 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185153 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185157 | orchestrator | 2026-01-07 00:59:43.185160 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-07 00:59:43.185165 | orchestrator | Wednesday 07 January 2026 00:51:54 +0000 (0:00:00.627) 0:03:18.472 ***** 2026-01-07 00:59:43.185168 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185171 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185174 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185177 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185180 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185183 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185187 | orchestrator | 2026-01-07 00:59:43.185191 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-07 00:59:43.185196 | orchestrator | 
Wednesday 07 January 2026 00:51:55 +0000 (0:00:00.982) 0:03:19.455 ***** 2026-01-07 00:59:43.185204 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:59:43.185210 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:59:43.185215 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:59:43.185220 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185240 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185245 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185249 | orchestrator | 2026-01-07 00:59:43.185254 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-07 00:59:43.185259 | orchestrator | Wednesday 07 January 2026 00:51:56 +0000 (0:00:00.975) 0:03:20.430 ***** 2026-01-07 00:59:43.185266 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-07 00:59:43.185273 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-07 00:59:43.185277 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185281 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-07 00:59:43.185287 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-07 00:59:43.185290 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185293 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-01-07 00:59:43.185297 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-07 00:59:43.185300 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185303 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185306 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185309 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185312 | orchestrator | 2026-01-07 00:59:43.185315 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-07 00:59:43.185318 | orchestrator | Wednesday 07 January 2026 00:51:57 +0000 (0:00:01.099) 0:03:21.530 ***** 2026-01-07 00:59:43.185321 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185324 | orchestrator | 
skipping: [testbed-node-4] 2026-01-07 00:59:43.185327 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185330 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185333 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185336 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185340 | orchestrator | 2026-01-07 00:59:43.185365 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-07 00:59:43.185368 | orchestrator | Wednesday 07 January 2026 00:51:58 +0000 (0:00:00.626) 0:03:22.156 ***** 2026-01-07 00:59:43.185371 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185374 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185377 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185380 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185383 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185386 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185389 | orchestrator | 2026-01-07 00:59:43.185395 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-07 00:59:43.185398 | orchestrator | Wednesday 07 January 2026 00:51:59 +0000 (0:00:00.858) 0:03:23.015 ***** 2026-01-07 00:59:43.185401 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185404 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185407 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185410 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185413 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185416 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185419 | orchestrator | 2026-01-07 00:59:43.185422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-07 00:59:43.185425 | 
orchestrator | Wednesday 07 January 2026 00:51:59 +0000 (0:00:00.749) 0:03:23.765 ***** 2026-01-07 00:59:43.185429 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185432 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185435 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185438 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185445 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185450 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185458 | orchestrator | 2026-01-07 00:59:43.185464 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-07 00:59:43.185484 | orchestrator | Wednesday 07 January 2026 00:52:00 +0000 (0:00:00.967) 0:03:24.732 ***** 2026-01-07 00:59:43.185490 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185494 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185500 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185504 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185510 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185514 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185519 | orchestrator | 2026-01-07 00:59:43.185524 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-07 00:59:43.185529 | orchestrator | Wednesday 07 January 2026 00:52:01 +0000 (0:00:00.624) 0:03:25.357 ***** 2026-01-07 00:59:43.185534 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.185539 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.185544 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.185549 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185552 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185555 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185558 | orchestrator | 2026-01-07 00:59:43.185561 | 
orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-07 00:59:43.185564 | orchestrator | Wednesday 07 January 2026 00:52:02 +0000 (0:00:00.968) 0:03:26.325 ***** 2026-01-07 00:59:43.185568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.185571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.185574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.185577 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185580 | orchestrator | 2026-01-07 00:59:43.185583 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-07 00:59:43.185586 | orchestrator | Wednesday 07 January 2026 00:52:02 +0000 (0:00:00.422) 0:03:26.748 ***** 2026-01-07 00:59:43.185589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.185592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.185595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.185599 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185602 | orchestrator | 2026-01-07 00:59:43.185605 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-07 00:59:43.185608 | orchestrator | Wednesday 07 January 2026 00:52:03 +0000 (0:00:00.403) 0:03:27.151 ***** 2026-01-07 00:59:43.185611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.185614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.185617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.185620 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185623 | orchestrator | 2026-01-07 00:59:43.185626 | orchestrator | TASK [ceph-facts : Reset rgw_instances 
(workaround)] *************************** 2026-01-07 00:59:43.185630 | orchestrator | Wednesday 07 January 2026 00:52:03 +0000 (0:00:00.389) 0:03:27.541 ***** 2026-01-07 00:59:43.185633 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.185636 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.185639 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.185642 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185645 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185648 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185651 | orchestrator | 2026-01-07 00:59:43.185654 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-07 00:59:43.185657 | orchestrator | Wednesday 07 January 2026 00:52:04 +0000 (0:00:00.648) 0:03:28.190 ***** 2026-01-07 00:59:43.185664 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-07 00:59:43.185667 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-07 00:59:43.185670 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-07 00:59:43.185673 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-07 00:59:43.185676 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185679 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-07 00:59:43.185682 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185685 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-07 00:59:43.185689 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185692 | orchestrator | 2026-01-07 00:59:43.185695 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-07 00:59:43.185698 | orchestrator | Wednesday 07 January 2026 00:52:06 +0000 (0:00:02.509) 0:03:30.699 ***** 2026-01-07 00:59:43.185701 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.185704 | orchestrator | changed: [testbed-node-4] 2026-01-07 
00:59:43.185707 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.185710 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.185713 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.185716 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.185719 | orchestrator | 2026-01-07 00:59:43.185724 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-07 00:59:43.185728 | orchestrator | Wednesday 07 January 2026 00:52:10 +0000 (0:00:03.271) 0:03:33.970 ***** 2026-01-07 00:59:43.185731 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.185734 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.185737 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.185740 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.185743 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.185746 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.185749 | orchestrator | 2026-01-07 00:59:43.185752 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-07 00:59:43.185755 | orchestrator | Wednesday 07 January 2026 00:52:11 +0000 (0:00:01.124) 0:03:35.095 ***** 2026-01-07 00:59:43.185758 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185761 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185764 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185768 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.185771 | orchestrator | 2026-01-07 00:59:43.185774 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-07 00:59:43.185788 | orchestrator | Wednesday 07 January 2026 00:52:12 +0000 (0:00:01.306) 0:03:36.402 ***** 2026-01-07 00:59:43.185792 | orchestrator | ok: [testbed-node-0] 2026-01-07 
00:59:43.185795 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.185799 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.185802 | orchestrator | 2026-01-07 00:59:43.185805 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-07 00:59:43.185808 | orchestrator | Wednesday 07 January 2026 00:52:12 +0000 (0:00:00.344) 0:03:36.747 ***** 2026-01-07 00:59:43.185811 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.185814 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.185817 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.185820 | orchestrator | 2026-01-07 00:59:43.185825 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-07 00:59:43.185830 | orchestrator | Wednesday 07 January 2026 00:52:14 +0000 (0:00:01.578) 0:03:38.325 ***** 2026-01-07 00:59:43.185835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:59:43.185840 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:59:43.185846 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:59:43.185851 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185856 | orchestrator | 2026-01-07 00:59:43.185865 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-07 00:59:43.185871 | orchestrator | Wednesday 07 January 2026 00:52:15 +0000 (0:00:00.661) 0:03:38.987 ***** 2026-01-07 00:59:43.185876 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.185882 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.185888 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.185893 | orchestrator | 2026-01-07 00:59:43.185899 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-07 00:59:43.185902 | orchestrator | Wednesday 07 January 2026 00:52:15 +0000 
(0:00:00.391) 0:03:39.379 ***** 2026-01-07 00:59:43.185905 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.185908 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.185911 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.185914 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:59:43.185917 | orchestrator | 2026-01-07 00:59:43.185920 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-07 00:59:43.185924 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:01.040) 0:03:40.419 ***** 2026-01-07 00:59:43.185927 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.185930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.185933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.185936 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185939 | orchestrator | 2026-01-07 00:59:43.185942 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-07 00:59:43.185945 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.407) 0:03:40.826 ***** 2026-01-07 00:59:43.185948 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185951 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185954 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185957 | orchestrator | 2026-01-07 00:59:43.185961 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-07 00:59:43.185964 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.361) 0:03:41.187 ***** 2026-01-07 00:59:43.185967 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185970 | orchestrator | 2026-01-07 00:59:43.185973 | orchestrator | 
RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-07 00:59:43.185976 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.223) 0:03:41.410 ***** 2026-01-07 00:59:43.185979 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.185982 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.185985 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.185988 | orchestrator | 2026-01-07 00:59:43.185991 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-07 00:59:43.185994 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.297) 0:03:41.707 ***** 2026-01-07 00:59:43.185997 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186000 | orchestrator | 2026-01-07 00:59:43.186004 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-07 00:59:43.186007 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:00.221) 0:03:41.929 ***** 2026-01-07 00:59:43.186010 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186034 | orchestrator | 2026-01-07 00:59:43.186037 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-07 00:59:43.186040 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:00.201) 0:03:42.130 ***** 2026-01-07 00:59:43.186043 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186046 | orchestrator | 2026-01-07 00:59:43.186054 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-07 00:59:43.186057 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:00.109) 0:03:42.240 ***** 2026-01-07 00:59:43.186060 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186066 | orchestrator | 2026-01-07 00:59:43.186069 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] 
***************** 2026-01-07 00:59:43.186072 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:00.761) 0:03:43.002 ***** 2026-01-07 00:59:43.186075 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186078 | orchestrator | 2026-01-07 00:59:43.186082 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-07 00:59:43.186085 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:00.217) 0:03:43.220 ***** 2026-01-07 00:59:43.186088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.186091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.186094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.186097 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186100 | orchestrator | 2026-01-07 00:59:43.186104 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-07 00:59:43.186119 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:00.419) 0:03:43.639 ***** 2026-01-07 00:59:43.186123 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186126 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.186129 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.186132 | orchestrator | 2026-01-07 00:59:43.186135 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-07 00:59:43.186138 | orchestrator | Wednesday 07 January 2026 00:52:20 +0000 (0:00:00.354) 0:03:43.994 ***** 2026-01-07 00:59:43.186141 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186144 | orchestrator | 2026-01-07 00:59:43.186147 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-07 00:59:43.186151 | orchestrator | Wednesday 07 January 2026 00:52:20 +0000 (0:00:00.220) 0:03:44.215 ***** 
2026-01-07 00:59:43.186154 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186157 | orchestrator | 2026-01-07 00:59:43.186160 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-07 00:59:43.186163 | orchestrator | Wednesday 07 January 2026 00:52:20 +0000 (0:00:00.235) 0:03:44.451 ***** 2026-01-07 00:59:43.186166 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186169 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186172 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186175 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:59:43.186178 | orchestrator | 2026-01-07 00:59:43.186181 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-07 00:59:43.186185 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:01.087) 0:03:45.538 ***** 2026-01-07 00:59:43.186188 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.186191 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.186194 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.186197 | orchestrator | 2026-01-07 00:59:43.186200 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-07 00:59:43.186203 | orchestrator | Wednesday 07 January 2026 00:52:22 +0000 (0:00:00.359) 0:03:45.898 ***** 2026-01-07 00:59:43.186206 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.186209 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.186212 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.186215 | orchestrator | 2026-01-07 00:59:43.186218 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-07 00:59:43.186222 | orchestrator | Wednesday 07 January 2026 00:52:23 +0000 (0:00:01.094) 0:03:46.992 ***** 2026-01-07 
00:59:43.186225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.186228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.186231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.186234 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186237 | orchestrator | 2026-01-07 00:59:43.186242 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-07 00:59:43.186246 | orchestrator | Wednesday 07 January 2026 00:52:24 +0000 (0:00:00.920) 0:03:47.912 ***** 2026-01-07 00:59:43.186249 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.186252 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.186255 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.186258 | orchestrator | 2026-01-07 00:59:43.186261 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-07 00:59:43.186264 | orchestrator | Wednesday 07 January 2026 00:52:24 +0000 (0:00:00.600) 0:03:48.513 ***** 2026-01-07 00:59:43.186267 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186270 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186273 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186277 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:59:43.186280 | orchestrator | 2026-01-07 00:59:43.186283 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-07 00:59:43.186286 | orchestrator | Wednesday 07 January 2026 00:52:25 +0000 (0:00:00.973) 0:03:49.486 ***** 2026-01-07 00:59:43.186289 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.186292 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.186295 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.186298 | 
orchestrator | 2026-01-07 00:59:43.186301 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-07 00:59:43.186304 | orchestrator | Wednesday 07 January 2026 00:52:26 +0000 (0:00:00.622) 0:03:50.108 ***** 2026-01-07 00:59:43.186308 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.186311 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.186314 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.186317 | orchestrator | 2026-01-07 00:59:43.186320 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-07 00:59:43.186331 | orchestrator | Wednesday 07 January 2026 00:52:27 +0000 (0:00:01.158) 0:03:51.267 ***** 2026-01-07 00:59:43.186335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.186338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.186349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.186353 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186356 | orchestrator | 2026-01-07 00:59:43.186359 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-07 00:59:43.186362 | orchestrator | Wednesday 07 January 2026 00:52:28 +0000 (0:00:00.582) 0:03:51.849 ***** 2026-01-07 00:59:43.186365 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.186368 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.186371 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.186374 | orchestrator | 2026-01-07 00:59:43.186377 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-07 00:59:43.186381 | orchestrator | Wednesday 07 January 2026 00:52:28 +0000 (0:00:00.322) 0:03:52.172 ***** 2026-01-07 00:59:43.186384 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186387 | orchestrator | 
skipping: [testbed-node-4] 2026-01-07 00:59:43.186390 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.186393 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186396 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186409 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186413 | orchestrator | 2026-01-07 00:59:43.186416 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-07 00:59:43.186419 | orchestrator | Wednesday 07 January 2026 00:52:29 +0000 (0:00:00.809) 0:03:52.981 ***** 2026-01-07 00:59:43.186422 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.186425 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.186428 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.186431 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.186437 | orchestrator | 2026-01-07 00:59:43.186440 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-07 00:59:43.186443 | orchestrator | Wednesday 07 January 2026 00:52:30 +0000 (0:00:00.812) 0:03:53.794 ***** 2026-01-07 00:59:43.186447 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186450 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186453 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186456 | orchestrator | 2026-01-07 00:59:43.186459 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-07 00:59:43.186462 | orchestrator | Wednesday 07 January 2026 00:52:30 +0000 (0:00:00.576) 0:03:54.370 ***** 2026-01-07 00:59:43.186465 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.186468 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.186471 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.186474 | orchestrator | 2026-01-07 
00:59:43.186477 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-07 00:59:43.186480 | orchestrator | Wednesday 07 January 2026 00:52:31 +0000 (0:00:01.384) 0:03:55.754 ***** 2026-01-07 00:59:43.186483 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:59:43.186487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:59:43.186490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:59:43.186493 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186496 | orchestrator | 2026-01-07 00:59:43.186499 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-07 00:59:43.186502 | orchestrator | Wednesday 07 January 2026 00:52:32 +0000 (0:00:00.686) 0:03:56.440 ***** 2026-01-07 00:59:43.186505 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186508 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186511 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186514 | orchestrator | 2026-01-07 00:59:43.186517 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-07 00:59:43.186520 | orchestrator | 2026-01-07 00:59:43.186524 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 00:59:43.186527 | orchestrator | Wednesday 07 January 2026 00:52:33 +0000 (0:00:00.574) 0:03:57.015 ***** 2026-01-07 00:59:43.186530 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.186533 | orchestrator | 2026-01-07 00:59:43.186536 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-07 00:59:43.186540 | orchestrator | Wednesday 07 January 2026 00:52:34 +0000 (0:00:00.779) 0:03:57.795 ***** 2026-01-07 
00:59:43.186545 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.186550 | orchestrator | 2026-01-07 00:59:43.186556 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:59:43.186561 | orchestrator | Wednesday 07 January 2026 00:52:34 +0000 (0:00:00.505) 0:03:58.300 ***** 2026-01-07 00:59:43.186566 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186571 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186576 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186581 | orchestrator | 2026-01-07 00:59:43.186586 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-07 00:59:43.186591 | orchestrator | Wednesday 07 January 2026 00:52:35 +0000 (0:00:01.058) 0:03:59.359 ***** 2026-01-07 00:59:43.186596 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186601 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186606 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186611 | orchestrator | 2026-01-07 00:59:43.186616 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:59:43.186621 | orchestrator | Wednesday 07 January 2026 00:52:36 +0000 (0:00:00.424) 0:03:59.783 ***** 2026-01-07 00:59:43.186631 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186636 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186642 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186647 | orchestrator | 2026-01-07 00:59:43.186652 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:59:43.186660 | orchestrator | Wednesday 07 January 2026 00:52:36 +0000 (0:00:00.509) 0:04:00.293 ***** 2026-01-07 00:59:43.186666 | orchestrator | skipping: [testbed-node-0] 
2026-01-07 00:59:43.186672 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186677 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186683 | orchestrator | 2026-01-07 00:59:43.186688 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:59:43.186694 | orchestrator | Wednesday 07 January 2026 00:52:36 +0000 (0:00:00.407) 0:04:00.701 ***** 2026-01-07 00:59:43.186699 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186704 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186707 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186710 | orchestrator | 2026-01-07 00:59:43.186713 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-07 00:59:43.186716 | orchestrator | Wednesday 07 January 2026 00:52:38 +0000 (0:00:01.389) 0:04:02.091 ***** 2026-01-07 00:59:43.186719 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186722 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186725 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186728 | orchestrator | 2026-01-07 00:59:43.186732 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:59:43.186735 | orchestrator | Wednesday 07 January 2026 00:52:38 +0000 (0:00:00.411) 0:04:02.502 ***** 2026-01-07 00:59:43.186752 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186756 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186759 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186762 | orchestrator | 2026-01-07 00:59:43.186765 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:59:43.186768 | orchestrator | Wednesday 07 January 2026 00:52:39 +0000 (0:00:00.272) 0:04:02.774 ***** 2026-01-07 00:59:43.186771 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186774 
| orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186777 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186781 | orchestrator | 2026-01-07 00:59:43.186784 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:59:43.186787 | orchestrator | Wednesday 07 January 2026 00:52:39 +0000 (0:00:00.844) 0:04:03.619 ***** 2026-01-07 00:59:43.186790 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186793 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186796 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186799 | orchestrator | 2026-01-07 00:59:43.186802 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:59:43.186805 | orchestrator | Wednesday 07 January 2026 00:52:40 +0000 (0:00:01.094) 0:04:04.714 ***** 2026-01-07 00:59:43.186808 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186811 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186814 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186818 | orchestrator | 2026-01-07 00:59:43.186821 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:59:43.186824 | orchestrator | Wednesday 07 January 2026 00:52:41 +0000 (0:00:00.362) 0:04:05.077 ***** 2026-01-07 00:59:43.186827 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186830 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186833 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186836 | orchestrator | 2026-01-07 00:59:43.186839 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:59:43.186842 | orchestrator | Wednesday 07 January 2026 00:52:41 +0000 (0:00:00.429) 0:04:05.506 ***** 2026-01-07 00:59:43.186845 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186848 | orchestrator | skipping: [testbed-node-1] 
2026-01-07 00:59:43.186855 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186858 | orchestrator | 2026-01-07 00:59:43.186861 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:59:43.186864 | orchestrator | Wednesday 07 January 2026 00:52:42 +0000 (0:00:00.347) 0:04:05.854 ***** 2026-01-07 00:59:43.186868 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186871 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186874 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186877 | orchestrator | 2026-01-07 00:59:43.186880 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:59:43.186883 | orchestrator | Wednesday 07 January 2026 00:52:42 +0000 (0:00:00.546) 0:04:06.401 ***** 2026-01-07 00:59:43.186886 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186889 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186892 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186895 | orchestrator | 2026-01-07 00:59:43.186898 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:59:43.186902 | orchestrator | Wednesday 07 January 2026 00:52:42 +0000 (0:00:00.306) 0:04:06.707 ***** 2026-01-07 00:59:43.186905 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186908 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.186911 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186914 | orchestrator | 2026-01-07 00:59:43.186917 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:59:43.186920 | orchestrator | Wednesday 07 January 2026 00:52:43 +0000 (0:00:00.329) 0:04:07.036 ***** 2026-01-07 00:59:43.186923 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.186926 | orchestrator | skipping: [testbed-node-1] 
2026-01-07 00:59:43.186929 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.186932 | orchestrator | 2026-01-07 00:59:43.186935 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:59:43.186938 | orchestrator | Wednesday 07 January 2026 00:52:43 +0000 (0:00:00.325) 0:04:07.362 ***** 2026-01-07 00:59:43.186941 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186945 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186948 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186951 | orchestrator | 2026-01-07 00:59:43.186954 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:59:43.186957 | orchestrator | Wednesday 07 January 2026 00:52:43 +0000 (0:00:00.334) 0:04:07.696 ***** 2026-01-07 00:59:43.186960 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186963 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186966 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186969 | orchestrator | 2026-01-07 00:59:43.186972 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:59:43.186978 | orchestrator | Wednesday 07 January 2026 00:52:44 +0000 (0:00:00.600) 0:04:08.296 ***** 2026-01-07 00:59:43.186981 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.186984 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.186987 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.186990 | orchestrator | 2026-01-07 00:59:43.186993 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-07 00:59:43.186996 | orchestrator | Wednesday 07 January 2026 00:52:45 +0000 (0:00:00.578) 0:04:08.875 ***** 2026-01-07 00:59:43.186999 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.187002 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.187005 | orchestrator | ok: [testbed-node-2] 
2026-01-07 00:59:43.187008 | orchestrator | 2026-01-07 00:59:43.187012 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-07 00:59:43.187015 | orchestrator | Wednesday 07 January 2026 00:52:45 +0000 (0:00:00.340) 0:04:09.216 ***** 2026-01-07 00:59:43.187018 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.187021 | orchestrator | 2026-01-07 00:59:43.187024 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-07 00:59:43.187030 | orchestrator | Wednesday 07 January 2026 00:52:46 +0000 (0:00:00.856) 0:04:10.072 ***** 2026-01-07 00:59:43.187033 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.187036 | orchestrator | 2026-01-07 00:59:43.187048 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-07 00:59:43.187052 | orchestrator | Wednesday 07 January 2026 00:52:46 +0000 (0:00:00.144) 0:04:10.216 ***** 2026-01-07 00:59:43.187055 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:59:43.187058 | orchestrator | 2026-01-07 00:59:43.187061 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-07 00:59:43.187064 | orchestrator | Wednesday 07 January 2026 00:52:47 +0000 (0:00:01.021) 0:04:11.238 ***** 2026-01-07 00:59:43.187067 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.187070 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.187073 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.187076 | orchestrator | 2026-01-07 00:59:43.187079 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-07 00:59:43.187083 | orchestrator | Wednesday 07 January 2026 00:52:47 +0000 (0:00:00.455) 0:04:11.693 ***** 2026-01-07 00:59:43.187086 | orchestrator | ok: [testbed-node-0] 
2026-01-07 00:59:43.187089 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.187092 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.187095 | orchestrator | 2026-01-07 00:59:43.187098 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-07 00:59:43.187101 | orchestrator | Wednesday 07 January 2026 00:52:48 +0000 (0:00:00.659) 0:04:12.352 ***** 2026-01-07 00:59:43.187104 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.187107 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.187110 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.187113 | orchestrator | 2026-01-07 00:59:43.187116 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-07 00:59:43.187119 | orchestrator | Wednesday 07 January 2026 00:52:49 +0000 (0:00:01.358) 0:04:13.710 ***** 2026-01-07 00:59:43.187123 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.187126 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.187129 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.187132 | orchestrator | 2026-01-07 00:59:43.187135 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-07 00:59:43.187138 | orchestrator | Wednesday 07 January 2026 00:52:50 +0000 (0:00:00.860) 0:04:14.571 ***** 2026-01-07 00:59:43.187141 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.187144 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.187147 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.187150 | orchestrator | 2026-01-07 00:59:43.187154 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-07 00:59:43.187157 | orchestrator | Wednesday 07 January 2026 00:52:51 +0000 (0:00:01.002) 0:04:15.574 ***** 2026-01-07 00:59:43.187160 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.187163 | 
orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.187166 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.187169 | orchestrator | 2026-01-07 00:59:43.187172 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-07 00:59:43.187175 | orchestrator | Wednesday 07 January 2026 00:52:52 +0000 (0:00:00.839) 0:04:16.413 ***** 2026-01-07 00:59:43.187178 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.187181 | orchestrator | 2026-01-07 00:59:43.187185 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-07 00:59:43.187188 | orchestrator | Wednesday 07 January 2026 00:52:54 +0000 (0:00:01.815) 0:04:18.229 ***** 2026-01-07 00:59:43.187191 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.187194 | orchestrator | 2026-01-07 00:59:43.187197 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-07 00:59:43.187200 | orchestrator | Wednesday 07 January 2026 00:52:55 +0000 (0:00:00.714) 0:04:18.943 ***** 2026-01-07 00:59:43.187205 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:59:43.187208 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-07 00:59:43.187212 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:59:43.187215 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-07 00:59:43.187218 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:59:43.187221 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:59:43.187224 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:59:43.187227 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-01-07 00:59:43.187230 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:59:43.187233 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-01-07 00:59:43.187236 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-07 00:59:43.187239 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-07 00:59:43.187242 | orchestrator | 2026-01-07 00:59:43.187246 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-07 00:59:43.187252 | orchestrator | Wednesday 07 January 2026 00:52:58 +0000 (0:00:03.541) 0:04:22.484 ***** 2026-01-07 00:59:43.187255 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.187258 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.187261 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.187264 | orchestrator | 2026-01-07 00:59:43.187267 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-07 00:59:43.187272 | orchestrator | Wednesday 07 January 2026 00:53:00 +0000 (0:00:01.336) 0:04:23.821 ***** 2026-01-07 00:59:43.187278 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.187284 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.187291 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.187297 | orchestrator | 2026-01-07 00:59:43.187302 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-07 00:59:43.187307 | orchestrator | Wednesday 07 January 2026 00:53:00 +0000 (0:00:00.934) 0:04:24.755 ***** 2026-01-07 00:59:43.187312 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.187317 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.187322 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.187327 | orchestrator | 2026-01-07 00:59:43.187332 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-07 00:59:43.187337 | orchestrator | Wednesday 07 January 2026 00:53:01 +0000 
(0:00:00.897) 0:04:25.653 ***** 2026-01-07 00:59:43.187353 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.187375 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.187380 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.187385 | orchestrator | 2026-01-07 00:59:43.187391 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-07 00:59:43.187395 | orchestrator | Wednesday 07 January 2026 00:53:04 +0000 (0:00:02.522) 0:04:28.176 ***** 2026-01-07 00:59:43.187398 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.187401 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.187404 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.187407 | orchestrator | 2026-01-07 00:59:43.187410 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-07 00:59:43.187413 | orchestrator | Wednesday 07 January 2026 00:53:06 +0000 (0:00:01.910) 0:04:30.086 ***** 2026-01-07 00:59:43.187416 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.187419 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.187422 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.187425 | orchestrator | 2026-01-07 00:59:43.187428 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-07 00:59:43.187431 | orchestrator | Wednesday 07 January 2026 00:53:06 +0000 (0:00:00.393) 0:04:30.479 ***** 2026-01-07 00:59:43.187434 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.187441 | orchestrator | 2026-01-07 00:59:43.187444 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-07 00:59:43.187447 | orchestrator | Wednesday 07 January 2026 00:53:07 +0000 (0:00:01.007) 0:04:31.486 ***** 2026-01-07 00:59:43.187450 | 
orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187453 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187456 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187459 | orchestrator |
2026-01-07 00:59:43.187462 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-01-07 00:59:43.187465 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:00.419) 0:04:31.906 *****
2026-01-07 00:59:43.187468 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187471 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187474 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187477 | orchestrator |
2026-01-07 00:59:43.187480 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-01-07 00:59:43.187483 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:00.446) 0:04:32.352 *****
2026-01-07 00:59:43.187486 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.187490 | orchestrator |
2026-01-07 00:59:43.187493 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-01-07 00:59:43.187496 | orchestrator | Wednesday 07 January 2026 00:53:09 +0000 (0:00:00.852) 0:04:33.204 *****
2026-01-07 00:59:43.187499 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.187502 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.187505 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.187508 | orchestrator |
2026-01-07 00:59:43.187511 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-01-07 00:59:43.187514 | orchestrator | Wednesday 07 January 2026 00:53:11 +0000 (0:00:01.894) 0:04:35.099 *****
2026-01-07 00:59:43.187518 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.187521 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.187524 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.187527 | orchestrator |
2026-01-07 00:59:43.187530 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-01-07 00:59:43.187533 | orchestrator | Wednesday 07 January 2026 00:53:12 +0000 (0:00:01.036) 0:04:36.135 *****
2026-01-07 00:59:43.187536 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.187539 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.187542 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.187545 | orchestrator |
2026-01-07 00:59:43.187548 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-01-07 00:59:43.187551 | orchestrator | Wednesday 07 January 2026 00:53:14 +0000 (0:00:01.687) 0:04:37.823 *****
2026-01-07 00:59:43.187554 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.187557 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.187560 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.187563 | orchestrator |
2026-01-07 00:59:43.187566 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-01-07 00:59:43.187569 | orchestrator | Wednesday 07 January 2026 00:53:16 +0000 (0:00:02.383) 0:04:40.207 *****
2026-01-07 00:59:43.187572 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.187575 | orchestrator |
2026-01-07 00:59:43.187578 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-01-07 00:59:43.187584 | orchestrator | Wednesday 07 January 2026 00:53:16 +0000 (0:00:00.498) 0:04:40.705 *****
2026-01-07 00:59:43.187587 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.187590 | orchestrator |
2026-01-07 00:59:43.187593 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-01-07 00:59:43.187596 | orchestrator | Wednesday 07 January 2026 00:53:18 +0000 (0:00:01.404) 0:04:42.109 *****
2026-01-07 00:59:43.187601 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.187604 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.187607 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.187610 | orchestrator |
2026-01-07 00:59:43.187613 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-01-07 00:59:43.187616 | orchestrator | Wednesday 07 January 2026 00:53:29 +0000 (0:00:10.687) 0:04:52.797 *****
2026-01-07 00:59:43.187619 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187623 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187626 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187629 | orchestrator |
2026-01-07 00:59:43.187632 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-01-07 00:59:43.187635 | orchestrator | Wednesday 07 January 2026 00:53:29 +0000 (0:00:00.537) 0:04:53.334 *****
2026-01-07 00:59:43.187649 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c763b382cbda39124ef29901f8d358dd1978fea'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-01-07 00:59:43.187654 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c763b382cbda39124ef29901f8d358dd1978fea'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-01-07 00:59:43.187658 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c763b382cbda39124ef29901f8d358dd1978fea'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-01-07 00:59:43.187662 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c763b382cbda39124ef29901f8d358dd1978fea'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-01-07 00:59:43.187665 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c763b382cbda39124ef29901f8d358dd1978fea'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-01-07 00:59:43.187669 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c763b382cbda39124ef29901f8d358dd1978fea'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__7c763b382cbda39124ef29901f8d358dd1978fea'}])
2026-01-07 00:59:43.187673 | orchestrator |
2026-01-07 00:59:43.187676 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-07 00:59:43.187679 | orchestrator | Wednesday 07 January 2026 00:53:42 +0000 (0:00:12.752) 0:05:06.086 *****
2026-01-07 00:59:43.187682 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187685 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187688 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187691 | orchestrator |
2026-01-07 00:59:43.187694 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-07 00:59:43.187697 | orchestrator | Wednesday 07 January 2026 00:53:42 +0000 (0:00:00.536) 0:05:06.623 *****
2026-01-07 00:59:43.187705 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.187708 | orchestrator |
2026-01-07 00:59:43.187711 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-07 00:59:43.187714 | orchestrator | Wednesday 07 January 2026 00:53:43 +0000 (0:00:00.926) 0:05:07.549 *****
2026-01-07 00:59:43.187717 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.187720 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.187724 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.187727 | orchestrator |
2026-01-07 00:59:43.187730 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-07 00:59:43.187735 | orchestrator | Wednesday 07 January 2026 00:53:44 +0000 (0:00:00.334) 0:05:07.884 *****
2026-01-07 00:59:43.187738 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187741 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187744 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187747 | orchestrator |
2026-01-07 00:59:43.187750 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-07 00:59:43.187753 | orchestrator | Wednesday 07 January 2026 00:53:44 +0000 (0:00:00.354) 0:05:08.239 *****
2026-01-07 00:59:43.187757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:59:43.187760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-07 00:59:43.187763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-07 00:59:43.187766 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187769 | orchestrator |
2026-01-07 00:59:43.187772 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-07 00:59:43.187775 | orchestrator | Wednesday 07 January 2026 00:53:45 +0000 (0:00:01.188) 0:05:09.428 *****
2026-01-07 00:59:43.187778 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.187781 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.187784 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.187787 | orchestrator |
2026-01-07 00:59:43.187791 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-07 00:59:43.187794 | orchestrator |
2026-01-07 00:59:43.187806 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-07 00:59:43.187809 | orchestrator | Wednesday 07 January 2026 00:53:46 +0000 (0:00:00.559) 0:05:09.988 *****
2026-01-07 00:59:43.187813 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.187816 | orchestrator |
2026-01-07 00:59:43.187819 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-07 00:59:43.187822 | orchestrator | Wednesday 07 January 2026 00:53:46 +0000 (0:00:00.494) 0:05:10.482 *****
2026-01-07 00:59:43.187825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.187829 | orchestrator |
2026-01-07 00:59:43.187832 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-07 00:59:43.187835 | orchestrator | Wednesday 07 January 2026 00:53:47 +0000 (0:00:00.745) 0:05:11.228 *****
2026-01-07 00:59:43.187838 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.187841 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.187844 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.187847 | orchestrator |
2026-01-07 00:59:43.187850 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-07 00:59:43.187853 | orchestrator | Wednesday 07 January 2026 00:53:48 +0000 (0:00:00.813) 0:05:12.042 *****
2026-01-07 00:59:43.187856 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187859 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187862 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187866 | orchestrator |
2026-01-07 00:59:43.187869 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-07 00:59:43.187874 | orchestrator | Wednesday 07 January 2026 00:53:48 +0000 (0:00:00.320) 0:05:12.362 *****
2026-01-07 00:59:43.187877 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187880 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187883 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187886 | orchestrator |
2026-01-07 00:59:43.187889 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-07 00:59:43.187892 | orchestrator | Wednesday 07 January 2026 00:53:49 +0000 (0:00:00.580) 0:05:12.943 *****
2026-01-07 00:59:43.187896 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187899 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187902 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187905 | orchestrator |
2026-01-07 00:59:43.187908 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-07 00:59:43.187911 | orchestrator | Wednesday 07 January 2026 00:53:49 +0000 (0:00:00.319) 0:05:13.262 *****
2026-01-07 00:59:43.187914 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.187917 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.187920 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.187923 | orchestrator |
2026-01-07 00:59:43.187926 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-07 00:59:43.187930 | orchestrator | Wednesday 07 January 2026 00:53:50 +0000 (0:00:00.709) 0:05:13.972 *****
2026-01-07 00:59:43.187933 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187936 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187939 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187942 | orchestrator |
2026-01-07 00:59:43.187945 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-07 00:59:43.187948 | orchestrator | Wednesday 07 January 2026 00:53:50 +0000 (0:00:00.324) 0:05:14.296 *****
2026-01-07 00:59:43.187951 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.187954 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.187957 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.187960 | orchestrator |
2026-01-07 00:59:43.187963 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-07 00:59:43.187966 | orchestrator | Wednesday 07 January 2026 00:53:51 +0000 (0:00:00.538) 0:05:14.834 *****
2026-01-07 00:59:43.187970 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.187973 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.187976 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.187979 | orchestrator |
2026-01-07 00:59:43.187982 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-07 00:59:43.187985 | orchestrator | Wednesday 07 January 2026 00:53:51 +0000 (0:00:00.731) 0:05:15.566 *****
2026-01-07 00:59:43.187988 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.187991 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.187994 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.187997 | orchestrator |
2026-01-07 00:59:43.188000 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-07 00:59:43.188003 | orchestrator | Wednesday 07 January 2026 00:53:52 +0000 (0:00:00.805) 0:05:16.372 *****
2026-01-07 00:59:43.188008 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188011 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188014 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188017 | orchestrator |
2026-01-07 00:59:43.188020 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-07 00:59:43.188024 | orchestrator | Wednesday 07 January 2026 00:53:52 +0000 (0:00:00.323) 0:05:16.695 *****
2026-01-07 00:59:43.188027 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.188030 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.188033 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188036 | orchestrator |
2026-01-07 00:59:43.188039 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-07 00:59:43.188042 | orchestrator | Wednesday 07 January 2026 00:53:53 +0000 (0:00:00.656) 0:05:17.352 *****
2026-01-07 00:59:43.188047 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188050 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188053 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188056 | orchestrator |
2026-01-07 00:59:43.188059 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-07 00:59:43.188063 | orchestrator | Wednesday 07 January 2026 00:53:53 +0000 (0:00:00.317) 0:05:17.669 *****
2026-01-07 00:59:43.188066 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188069 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188080 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188084 | orchestrator |
2026-01-07 00:59:43.188087 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-07 00:59:43.188090 | orchestrator | Wednesday 07 January 2026 00:53:54 +0000 (0:00:00.365) 0:05:18.034 *****
2026-01-07 00:59:43.188093 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188096 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188099 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188103 | orchestrator |
2026-01-07 00:59:43.188106 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-07 00:59:43.188109 | orchestrator | Wednesday 07 January 2026 00:53:54 +0000 (0:00:00.332) 0:05:18.366 *****
2026-01-07 00:59:43.188112 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188115 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188118 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188121 | orchestrator |
2026-01-07 00:59:43.188124 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-07 00:59:43.188127 | orchestrator | Wednesday 07 January 2026 00:53:54 +0000 (0:00:00.312) 0:05:18.678 *****
2026-01-07 00:59:43.188130 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188133 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188136 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188139 | orchestrator |
2026-01-07 00:59:43.188143 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-07 00:59:43.188146 | orchestrator | Wednesday 07 January 2026 00:53:55 +0000 (0:00:00.623) 0:05:19.301 *****
2026-01-07 00:59:43.188149 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.188152 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.188155 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188158 | orchestrator |
2026-01-07 00:59:43.188161 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-07 00:59:43.188164 | orchestrator | Wednesday 07 January 2026 00:53:55 +0000 (0:00:00.327) 0:05:19.629 *****
2026-01-07 00:59:43.188167 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.188170 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.188174 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188177 | orchestrator |
2026-01-07 00:59:43.188180 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-07 00:59:43.188183 | orchestrator | Wednesday 07 January 2026 00:53:56 +0000 (0:00:00.335) 0:05:19.965 *****
2026-01-07 00:59:43.188186 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.188189 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.188192 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188195 | orchestrator |
2026-01-07 00:59:43.188198 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-07 00:59:43.188201 | orchestrator | Wednesday 07 January 2026 00:53:56 +0000 (0:00:00.769) 0:05:20.735 *****
2026-01-07 00:59:43.188205 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:59:43.188208 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-07 00:59:43.188211 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-07 00:59:43.188214 | orchestrator |
2026-01-07 00:59:43.188217 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-07 00:59:43.188220 | orchestrator | Wednesday 07 January 2026 00:53:57 +0000 (0:00:00.623) 0:05:21.358 *****
2026-01-07 00:59:43.188225 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.188228 | orchestrator |
2026-01-07 00:59:43.188232 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-07 00:59:43.188235 | orchestrator | Wednesday 07 January 2026 00:53:58 +0000 (0:00:00.505) 0:05:21.863 *****
2026-01-07 00:59:43.188238 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.188241 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.188244 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.188247 | orchestrator |
2026-01-07 00:59:43.188250 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-07 00:59:43.188253 | orchestrator | Wednesday 07 January 2026 00:53:58 +0000 (0:00:00.697) 0:05:22.560 *****
2026-01-07 00:59:43.188256 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188259 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188262 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188265 | orchestrator |
2026-01-07 00:59:43.188268 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-01-07 00:59:43.188271 | orchestrator | Wednesday 07 January 2026 00:53:59 +0000 (0:00:00.542) 0:05:23.103 *****
2026-01-07 00:59:43.188275 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 00:59:43.188278 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 00:59:43.188281 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 00:59:43.188286 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-01-07 00:59:43.188289 | orchestrator |
2026-01-07 00:59:43.188292 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-01-07 00:59:43.188295 | orchestrator | Wednesday 07 January 2026 00:54:10 +0000 (0:00:10.675) 0:05:33.778 *****
2026-01-07 00:59:43.188299 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.188302 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.188305 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188309 | orchestrator |
2026-01-07 00:59:43.188315 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-01-07 00:59:43.188322 | orchestrator | Wednesday 07 January 2026 00:54:10 +0000 (0:00:00.359) 0:05:34.138 *****
2026-01-07 00:59:43.188329 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-07 00:59:43.188334 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-07 00:59:43.188339 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-07 00:59:43.188369 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-07 00:59:43.188375 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-07 00:59:43.188380 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-07 00:59:43.188385 | orchestrator |
2026-01-07 00:59:43.188405 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-01-07 00:59:43.188409 | orchestrator | Wednesday 07 January 2026 00:54:12 +0000 (0:00:02.541) 0:05:36.680 *****
2026-01-07 00:59:43.188412 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-07 00:59:43.188415 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-07 00:59:43.188418 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-07 00:59:43.188421 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 00:59:43.188424 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-07 00:59:43.188428 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-07 00:59:43.188431 | orchestrator |
2026-01-07 00:59:43.188434 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-01-07 00:59:43.188437 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:01.294) 0:05:37.974 *****
2026-01-07 00:59:43.188440 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.188443 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.188447 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188450 | orchestrator |
2026-01-07 00:59:43.188456 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-01-07 00:59:43.188459 | orchestrator | Wednesday 07 January 2026 00:54:15 +0000 (0:00:01.145) 0:05:39.120 *****
2026-01-07 00:59:43.188462 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188466 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188469 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188472 | orchestrator |
2026-01-07 00:59:43.188475 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-07 00:59:43.188478 | orchestrator | Wednesday 07 January 2026 00:54:15 +0000 (0:00:00.357) 0:05:39.477 *****
2026-01-07 00:59:43.188481 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188484 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188487 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188490 | orchestrator |
2026-01-07 00:59:43.188494 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-07 00:59:43.188497 | orchestrator | Wednesday 07 January 2026 00:54:16 +0000 (0:00:00.288) 0:05:39.766 *****
2026-01-07 00:59:43.188500 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.188503 | orchestrator |
2026-01-07 00:59:43.188506 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-07 00:59:43.188509 | orchestrator | Wednesday 07 January 2026 00:54:16 +0000 (0:00:00.834) 0:05:40.601 *****
2026-01-07 00:59:43.188512 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188516 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188519 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188522 | orchestrator |
2026-01-07 00:59:43.188525 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-07 00:59:43.188528 | orchestrator | Wednesday 07 January 2026 00:54:17 +0000 (0:00:00.317) 0:05:40.919 *****
2026-01-07 00:59:43.188531 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188535 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188538 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:59:43.188541 | orchestrator |
2026-01-07 00:59:43.188544 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-07 00:59:43.188547 | orchestrator | Wednesday 07 January 2026 00:54:17 +0000 (0:00:00.318) 0:05:41.237 *****
2026-01-07 00:59:43.188550 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.188553 | orchestrator |
2026-01-07 00:59:43.188557 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-07 00:59:43.188560 | orchestrator | Wednesday 07 January 2026 00:54:18 +0000 (0:00:00.788) 0:05:42.026 *****
2026-01-07 00:59:43.188563 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.188566 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.188569 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.188572 | orchestrator |
2026-01-07 00:59:43.188575 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-07 00:59:43.188578 | orchestrator | Wednesday 07 January 2026 00:54:19 +0000 (0:00:01.452) 0:05:43.479 *****
2026-01-07 00:59:43.188582 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.188585 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.188588 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.188591 | orchestrator |
2026-01-07 00:59:43.188594 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-07 00:59:43.188597 | orchestrator | Wednesday 07 January 2026 00:54:21 +0000 (0:00:01.386) 0:05:44.866 *****
2026-01-07 00:59:43.188600 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.188603 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.188607 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.188610 | orchestrator |
2026-01-07 00:59:43.188615 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-07 00:59:43.188619 | orchestrator | Wednesday 07 January 2026 00:54:23 +0000 (0:00:02.145) 0:05:47.012 *****
2026-01-07 00:59:43.188624 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.188628 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.188631 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.188634 | orchestrator |
2026-01-07 00:59:43.188637 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-07 00:59:43.188640 | orchestrator | Wednesday 07 January 2026 00:54:25 +0000 (0:00:02.195) 0:05:49.207 *****
2026-01-07 00:59:43.188643 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188646 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:59:43.188649 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-07 00:59:43.188653 | orchestrator |
2026-01-07 00:59:43.188656 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-07 00:59:43.188659 | orchestrator | Wednesday 07 January 2026 00:54:25 +0000 (0:00:00.416) 0:05:49.624 *****
2026-01-07 00:59:43.188662 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-01-07 00:59:43.188674 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-01-07 00:59:43.188678 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-01-07 00:59:43.188681 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-01-07 00:59:43.188684 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-01-07 00:59:43.188687 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-07 00:59:43.188691 | orchestrator |
2026-01-07 00:59:43.188694 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-07 00:59:43.188697 | orchestrator | Wednesday 07 January 2026 00:54:55 +0000 (0:00:30.121) 0:06:19.746 *****
2026-01-07 00:59:43.188700 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-07 00:59:43.188703 | orchestrator |
2026-01-07 00:59:43.188706 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-07 00:59:43.188709 | orchestrator | Wednesday 07 January 2026 00:54:57 +0000 (0:00:01.494) 0:06:21.241 *****
2026-01-07 00:59:43.188713 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188716 | orchestrator |
2026-01-07 00:59:43.188719 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-07 00:59:43.188722 | orchestrator | Wednesday 07 January 2026 00:54:57 +0000 (0:00:00.150) 0:06:21.532 *****
2026-01-07 00:59:43.188725 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188728 | orchestrator |
2026-01-07 00:59:43.188731 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-07 00:59:43.188734 | orchestrator | Wednesday 07 January 2026 00:54:57 +0000 (0:00:00.150) 0:06:21.683 *****
2026-01-07 00:59:43.188738 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-07 00:59:43.188741 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-07 00:59:43.188744 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-07 00:59:43.188747 | orchestrator |
2026-01-07 00:59:43.188750 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-07 00:59:43.188753 | orchestrator | Wednesday 07 January 2026 00:55:05 +0000 (0:00:07.666) 0:06:29.350 *****
2026-01-07 00:59:43.188756 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-07 00:59:43.188760 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-01-07 00:59:43.188763 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-01-07 00:59:43.188766 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-07 00:59:43.188769 | orchestrator |
2026-01-07 00:59:43.188772 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-07 00:59:43.188777 | orchestrator | Wednesday 07 January 2026 00:55:10 +0000 (0:00:05.000) 0:06:34.350 *****
2026-01-07 00:59:43.188780 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.188783 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.188787 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.188790 | orchestrator |
2026-01-07 00:59:43.188793 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-07 00:59:43.188796 | orchestrator | Wednesday 07 January 2026 00:55:11 +0000 (0:00:00.631) 0:06:34.981 *****
2026-01-07 00:59:43.188799 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:59:43.188802 | orchestrator |
2026-01-07 00:59:43.188806 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-07 00:59:43.188809 | orchestrator | Wednesday 07 January 2026 00:55:11 +0000 (0:00:00.782) 0:06:35.764 *****
2026-01-07 00:59:43.188812 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.188815 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.188818 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188821 | orchestrator |
2026-01-07 00:59:43.188824 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-07 00:59:43.188828 | orchestrator | Wednesday 07 January 2026 00:55:12 +0000 (0:00:00.378) 0:06:36.142 *****
2026-01-07 00:59:43.188831 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:59:43.188834 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:59:43.188837 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:59:43.188840 | orchestrator |
2026-01-07 00:59:43.188843 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-07 00:59:43.188846 | orchestrator | Wednesday 07 January 2026 00:55:13 +0000 (0:00:01.211) 0:06:37.353 *****
2026-01-07 00:59:43.188850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:59:43.188853 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-07 00:59:43.188856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-07 00:59:43.188859 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:59:43.188862 | orchestrator |
2026-01-07 00:59:43.188866 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-07 00:59:43.188869 | orchestrator | Wednesday 07 January 2026 00:55:14 +0000 (0:00:00.836) 0:06:38.190 *****
2026-01-07 00:59:43.188872 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:43.188875 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:43.188878 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:43.188881 | orchestrator |
2026-01-07 00:59:43.188884 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-01-07 00:59:43.188888 | orchestrator |
2026-01-07 00:59:43.188891 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-07 00:59:43.188894 | orchestrator | Wednesday 07 January 2026 00:55:15 +0000 (0:00:00.827) 0:06:39.017 *****
2026-01-07 00:59:43.188897 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:59:43.188900 | orchestrator |
2026-01-07 00:59:43.188913 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-07 00:59:43.188917 | orchestrator | Wednesday 07 January 2026 00:55:15 +0000 (0:00:00.524) 0:06:39.542 *****
2026-01-07 00:59:43.188920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:59:43.188923 | orchestrator |
2026-01-07 00:59:43.188926 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-07 00:59:43.188929 | orchestrator | Wednesday 07 January 2026 00:55:16 +0000 (0:00:00.715) 0:06:40.258 *****
2026-01-07 00:59:43.188932 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.188935 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.188939 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.188944 | orchestrator |
2026-01-07 00:59:43.188947 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-07 00:59:43.188950 | orchestrator | Wednesday 07 January 2026 00:55:16 +0000 (0:00:00.286) 0:06:40.545 *****
2026-01-07 00:59:43.188953 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.188957 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.188960 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.188963 | orchestrator |
2026-01-07 00:59:43.188966 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-07 00:59:43.188969 | orchestrator | Wednesday 07 January 2026 00:55:17 +0000 (0:00:00.732) 0:06:41.278 *****
2026-01-07 00:59:43.188972 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.188975 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.188979 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.188982 | orchestrator |
2026-01-07 00:59:43.188985 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-07 00:59:43.188988 | orchestrator | Wednesday 07 January 2026 00:55:18 +0000 (0:00:00.812) 0:06:42.090 *****
2026-01-07 00:59:43.188991 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.188994 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.188997 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189000 | orchestrator |
2026-01-07 00:59:43.189003 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-07 00:59:43.189006 | orchestrator | Wednesday 07 January 2026 00:55:19 +0000 (0:00:00.996) 0:06:43.086 *****
2026-01-07 00:59:43.189010 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189013 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189016 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189019 | orchestrator |
2026-01-07 00:59:43.189022 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-07 00:59:43.189025 | orchestrator | Wednesday 07 January 2026 00:55:19 +0000 (0:00:00.296) 0:06:43.382 *****
2026-01-07 00:59:43.189028 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189031 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189035 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189038 | orchestrator |
2026-01-07 00:59:43.189041 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-07 00:59:43.189044 | orchestrator | Wednesday 07 January 2026 00:55:19 +0000 (0:00:00.308) 0:06:43.690 *****
2026-01-07 00:59:43.189047 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189050 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189053 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189056 | orchestrator |
2026-01-07 00:59:43.189059 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-07 00:59:43.189063 | orchestrator | Wednesday 07 January 2026 00:55:20 +0000 (0:00:00.311) 0:06:44.002 *****
2026-01-07 00:59:43.189066 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189069 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189072 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189075 | orchestrator |
2026-01-07 00:59:43.189078 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-07 00:59:43.189081 | orchestrator | Wednesday 07 January 2026 00:55:21 +0000 (0:00:01.063) 0:06:45.066 *****
2026-01-07 00:59:43.189085 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189088 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189091 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189094 | orchestrator |
2026-01-07 00:59:43.189097 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-07 00:59:43.189100 | orchestrator | Wednesday 07 January 2026 00:55:22 +0000 (0:00:00.708) 0:06:45.774 *****
2026-01-07 00:59:43.189103 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189106 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189109 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189113 | orchestrator |
2026-01-07 00:59:43.189116 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-07 00:59:43.189135 | orchestrator | Wednesday 07 January 2026 00:55:22 +0000 (0:00:00.350) 0:06:46.125 *****
2026-01-07 00:59:43.189139 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189142 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189147 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189150 | orchestrator |
2026-01-07 00:59:43.189153 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-07 00:59:43.189156 | orchestrator | Wednesday 07 January 2026 00:55:22 +0000 (0:00:00.295) 0:06:46.420 *****
2026-01-07 00:59:43.189159 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189162 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189165 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189168 | orchestrator |
2026-01-07 00:59:43.189171 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-07 00:59:43.189174 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:00.615) 0:06:47.036 *****
2026-01-07 00:59:43.189177 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189181 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189184 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189187 | orchestrator |
2026-01-07 00:59:43.189190 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-07 00:59:43.189193 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:00.355) 0:06:47.391 *****
2026-01-07 00:59:43.189196 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189199 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189202 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189205 | orchestrator |
2026-01-07 00:59:43.189208 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-07 00:59:43.189213 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:00.349) 0:06:47.740 *****
2026-01-07 00:59:43.189216 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189219 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189222 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189225 | orchestrator |
2026-01-07 00:59:43.189228 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-07 00:59:43.189232 | orchestrator | Wednesday 07 January 2026 00:55:24 +0000 (0:00:00.298) 0:06:48.039 *****
2026-01-07 00:59:43.189235 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189240 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189247 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189254 | orchestrator |
2026-01-07 00:59:43.189258 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-07 00:59:43.189263 | orchestrator | Wednesday 07 January 2026 00:55:24 +0000 (0:00:00.607) 0:06:48.647 *****
2026-01-07 00:59:43.189268 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189273 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189278 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189282 | orchestrator |
2026-01-07 00:59:43.189287 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-07 00:59:43.189292 | orchestrator | Wednesday 07 January 2026 00:55:25 +0000 (0:00:00.342) 0:06:48.989 *****
2026-01-07 00:59:43.189297 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189302 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189307 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189312 | orchestrator |
2026-01-07 00:59:43.189317 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-07 00:59:43.189322 | orchestrator | Wednesday 07 January 2026 00:55:25 +0000 (0:00:00.340) 0:06:49.330 *****
2026-01-07 00:59:43.189328 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189333 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189338 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189354 | orchestrator |
2026-01-07 00:59:43.189359 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-07 00:59:43.189365 | orchestrator | Wednesday 07 January 2026 00:55:26 +0000 (0:00:00.769) 0:06:50.100 *****
2026-01-07 00:59:43.189376 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189381 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189385 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189388 | orchestrator |
2026-01-07 00:59:43.189391 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-07 00:59:43.189394 | orchestrator | Wednesday 07 January 2026 00:55:26 +0000 (0:00:00.337) 0:06:50.437 *****
2026-01-07 00:59:43.189398 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-07 00:59:43.189401 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-07 00:59:43.189404 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-07 00:59:43.189407 | orchestrator |
2026-01-07 00:59:43.189410 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-07 00:59:43.189413 | orchestrator | Wednesday 07 January 2026 00:55:27 +0000 (0:00:00.615) 0:06:51.052 *****
2026-01-07 00:59:43.189416 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:59:43.189419 | orchestrator |
2026-01-07 00:59:43.189422 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-07 00:59:43.189425 | orchestrator | Wednesday 07 January 2026 00:55:27 +0000 (0:00:00.580) 0:06:51.633 *****
2026-01-07 00:59:43.189428 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189431 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189435 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189438 | orchestrator |
2026-01-07 00:59:43.189441 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-07 00:59:43.189444 | orchestrator | Wednesday 07 January 2026 00:55:28 +0000 (0:00:00.608) 0:06:52.242 *****
2026-01-07 00:59:43.189447 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189450 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189453 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189456 | orchestrator |
2026-01-07 00:59:43.189459 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-07 00:59:43.189462 | orchestrator | Wednesday 07 January 2026 00:55:28 +0000 (0:00:00.290) 0:06:52.533 *****
2026-01-07 00:59:43.189465 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189468 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189471 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189474 | orchestrator |
2026-01-07 00:59:43.189478 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-07 00:59:43.189483 | orchestrator | Wednesday 07 January 2026 00:55:29 +0000 (0:00:00.668) 0:06:53.201 *****
2026-01-07 00:59:43.189486 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189489 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189492 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189495 | orchestrator |
2026-01-07 00:59:43.189498 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-07 00:59:43.189501 | orchestrator | Wednesday 07 January 2026 00:55:29 +0000 (0:00:00.345) 0:06:53.547 *****
2026-01-07 00:59:43.189504 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-07 00:59:43.189508 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-07 00:59:43.189511 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-07 00:59:43.189514 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-07 00:59:43.189517 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-07 00:59:43.189520 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-07 00:59:43.189527 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-07 00:59:43.189533 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-07 00:59:43.189536 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-07 00:59:43.189539 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-07 00:59:43.189542 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-07 00:59:43.189545 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-07 00:59:43.189548 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-07 00:59:43.189552 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-07 00:59:43.189555 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-07 00:59:43.189558 | orchestrator |
2026-01-07 00:59:43.189561 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-07 00:59:43.189564 | orchestrator | Wednesday 07 January 2026 00:55:32 +0000 (0:00:03.154) 0:06:56.701 *****
2026-01-07 00:59:43.189567 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189570 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189573 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189576 | orchestrator |
2026-01-07 00:59:43.189579 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-07 00:59:43.189582 | orchestrator | Wednesday 07 January 2026 00:55:33 +0000 (0:00:00.298) 0:06:57.000 *****
2026-01-07 00:59:43.189586 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:59:43.189589 | orchestrator |
2026-01-07 00:59:43.189592 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-07 00:59:43.189595 | orchestrator | Wednesday 07 January 2026 00:55:33 +0000 (0:00:00.504) 0:06:57.504 *****
2026-01-07 00:59:43.189598 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-07 00:59:43.189601 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-07 00:59:43.189604 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-07 00:59:43.189607 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-07 00:59:43.189610 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-07 00:59:43.189614 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-07 00:59:43.189617 | orchestrator |
2026-01-07 00:59:43.189620 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-07 00:59:43.189623 | orchestrator | Wednesday 07 January 2026 00:55:35 +0000 (0:00:01.584) 0:06:59.089 *****
2026-01-07 00:59:43.189626 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-07 00:59:43.189629 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-07 00:59:43.189632 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-07 00:59:43.189635 | orchestrator |
2026-01-07 00:59:43.189638 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-07 00:59:43.189641 | orchestrator | Wednesday 07 January 2026 00:55:37 +0000 (0:00:02.320) 0:07:01.410 *****
2026-01-07 00:59:43.189644 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 00:59:43.189648 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 00:59:43.189651 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-07 00:59:43.189654 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-07 00:59:43.189657 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:59:43.189660 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:59:43.189663 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 00:59:43.189666 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-07 00:59:43.189669 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:59:43.189674 | orchestrator |
2026-01-07 00:59:43.189677 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-07 00:59:43.189681 | orchestrator | Wednesday 07 January 2026 00:55:39 +0000 (0:00:01.454) 0:07:02.865 *****
2026-01-07 00:59:43.189686 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-07 00:59:43.189692 | orchestrator |
2026-01-07 00:59:43.189699 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-07 00:59:43.189707 | orchestrator | Wednesday 07 January 2026 00:55:41 +0000 (0:00:02.293) 0:07:05.158 *****
2026-01-07 00:59:43.189712 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:59:43.189718 | orchestrator |
2026-01-07 00:59:43.189722 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-07 00:59:43.189727 | orchestrator | Wednesday 07 January 2026 00:55:42 +0000 (0:00:00.780) 0:07:05.939 *****
2026-01-07 00:59:43.189732 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc', 'data_vg': 'ceph-ef56a04c-76f1-5b5f-91f5-fd927a7d00fc'})
2026-01-07 00:59:43.189738 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bbd296ce-f103-5a39-9243-23354e346d82', 'data_vg': 'ceph-bbd296ce-f103-5a39-9243-23354e346d82'})
2026-01-07 00:59:43.189742 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4e6008a2-36a5-590e-8013-ca4c2218d3f7', 'data_vg': 'ceph-4e6008a2-36a5-590e-8013-ca4c2218d3f7'})
2026-01-07 00:59:43.189750 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-35426297-011a-51b2-a2d6-4f3d2a544c0e', 'data_vg': 'ceph-35426297-011a-51b2-a2d6-4f3d2a544c0e'})
2026-01-07 00:59:43.189755 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5711b466-e770-5253-91be-c96275afda22', 'data_vg': 'ceph-5711b466-e770-5253-91be-c96275afda22'})
2026-01-07 00:59:43.189760 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-16bf28f1-ae52-5ff4-8907-41e0bcdec1af', 'data_vg': 'ceph-16bf28f1-ae52-5ff4-8907-41e0bcdec1af'})
2026-01-07 00:59:43.189765 | orchestrator |
2026-01-07 00:59:43.189769 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-07 00:59:43.189774 | orchestrator | Wednesday 07 January 2026 00:56:24 +0000 (0:00:42.269) 0:07:48.208 *****
2026-01-07 00:59:43.189779 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189783 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189788 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.189793 | orchestrator |
2026-01-07 00:59:43.189797 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-07 00:59:43.189801 | orchestrator | Wednesday 07 January 2026 00:56:24 +0000 (0:00:00.293) 0:07:48.502 *****
2026-01-07 00:59:43.189806 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:59:43.189811 | orchestrator |
2026-01-07 00:59:43.189816 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-07 00:59:43.189821 | orchestrator | Wednesday 07 January 2026 00:56:25 +0000 (0:00:00.817) 0:07:49.319 *****
2026-01-07 00:59:43.189826 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189830 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189836 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189840 | orchestrator |
2026-01-07 00:59:43.189845 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-07 00:59:43.189851 | orchestrator | Wednesday 07 January 2026 00:56:26 +0000 (0:00:00.648) 0:07:49.968 *****
2026-01-07 00:59:43.189856 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:59:43.189861 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:59:43.189866 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:59:43.189871 | orchestrator |
2026-01-07 00:59:43.189876 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-07 00:59:43.189884 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:02.799) 0:07:52.767 *****
2026-01-07 00:59:43.189895 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:59:43.189900 | orchestrator |
2026-01-07 00:59:43.189905 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-07 00:59:43.189910 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:00.768) 0:07:53.536 *****
2026-01-07 00:59:43.189915 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:59:43.189919 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:59:43.189923 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:59:43.189928 | orchestrator |
2026-01-07 00:59:43.189933 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-07 00:59:43.189938 | orchestrator | Wednesday 07 January 2026 00:56:30 +0000 (0:00:01.210) 0:07:54.746 *****
2026-01-07 00:59:43.189943 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:59:43.189948 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:59:43.189954 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:59:43.189959 | orchestrator |
2026-01-07 00:59:43.189965 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-07 00:59:43.189969 | orchestrator | Wednesday 07 January 2026 00:56:32 +0000 (0:00:01.183) 0:07:55.930 *****
2026-01-07 00:59:43.189972 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:59:43.189975 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:59:43.189978 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:59:43.189981 | orchestrator |
2026-01-07 00:59:43.189984 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-07 00:59:43.189988 | orchestrator | Wednesday 07 January 2026 00:56:34 +0000 (0:00:01.889) 0:07:57.819 *****
2026-01-07 00:59:43.189991 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.189994 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.189997 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.190000 | orchestrator |
2026-01-07 00:59:43.190003 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-07 00:59:43.190006 | orchestrator | Wednesday 07 January 2026 00:56:34 +0000 (0:00:00.577) 0:07:58.396 *****
2026-01-07 00:59:43.190009 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190042 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.190046 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.190049 | orchestrator |
2026-01-07 00:59:43.190054 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-07 00:59:43.190057 | orchestrator | Wednesday 07 January 2026 00:56:34 +0000 (0:00:00.318) 0:07:58.715 *****
2026-01-07 00:59:43.190060 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-07 00:59:43.190064 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-01-07 00:59:43.190067 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-01-07 00:59:43.190070 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-01-07 00:59:43.190073 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-01-07 00:59:43.190076 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-01-07 00:59:43.190079 | orchestrator |
2026-01-07 00:59:43.190082 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-07 00:59:43.190085 | orchestrator | Wednesday 07 January 2026 00:56:36 +0000 (0:00:01.102) 0:07:59.817 *****
2026-01-07 00:59:43.190088 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-07 00:59:43.190092 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-07 00:59:43.190095 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-07 00:59:43.190098 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-01-07 00:59:43.190101 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-07 00:59:43.190104 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-07 00:59:43.190107 | orchestrator |
2026-01-07 00:59:43.190115 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-07 00:59:43.190118 | orchestrator | Wednesday 07 January 2026 00:56:38 +0000 (0:00:02.249) 0:08:02.067 *****
2026-01-07 00:59:43.190125 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-07 00:59:43.190128 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-07 00:59:43.190131 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-07 00:59:43.190134 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-01-07 00:59:43.190137 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-07 00:59:43.190141 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-07 00:59:43.190144 | orchestrator |
2026-01-07 00:59:43.190147 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-07 00:59:43.190150 | orchestrator | Wednesday 07 January 2026 00:56:42 +0000 (0:00:04.342) 0:08:06.409 *****
2026-01-07 00:59:43.190153 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190156 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.190159 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-07 00:59:43.190162 | orchestrator |
2026-01-07 00:59:43.190165 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-07 00:59:43.190169 | orchestrator | Wednesday 07 January 2026 00:56:44 +0000 (0:00:02.269) 0:08:08.679 *****
2026-01-07 00:59:43.190172 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190175 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.190178 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-01-07 00:59:43.190181 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-07 00:59:43.190184 | orchestrator |
2026-01-07 00:59:43.190187 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-07 00:59:43.190190 | orchestrator | Wednesday 07 January 2026 00:56:57 +0000 (0:00:12.593) 0:08:21.272 *****
2026-01-07 00:59:43.190194 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190197 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.190200 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.190203 | orchestrator |
2026-01-07 00:59:43.190206 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-07 00:59:43.190209 | orchestrator | Wednesday 07 January 2026 00:56:58 +0000 (0:00:01.060) 0:08:22.333 *****
2026-01-07 00:59:43.190212 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190215 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.190218 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.190221 | orchestrator |
2026-01-07 00:59:43.190224 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-07 00:59:43.190228 | orchestrator | Wednesday 07 January 2026 00:56:58 +0000 (0:00:00.321) 0:08:22.655 *****
2026-01-07 00:59:43.190231 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:59:43.190234 | orchestrator |
2026-01-07 00:59:43.190237 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-07 00:59:43.190240 | orchestrator | Wednesday 07 January 2026 00:56:59 +0000 (0:00:00.856) 0:08:23.512 *****
2026-01-07 00:59:43.190243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:59:43.190246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:59:43.190249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:59:43.190252 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190255 | orchestrator |
2026-01-07 00:59:43.190259 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-07 00:59:43.190262 | orchestrator | Wednesday 07 January 2026 00:57:00 +0000 (0:00:00.384) 0:08:23.896 *****
2026-01-07 00:59:43.190265 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190268 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.190271 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.190274 | orchestrator |
2026-01-07 00:59:43.190277 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-07 00:59:43.190280 | orchestrator | Wednesday 07 January 2026 00:57:00 +0000 (0:00:00.332) 0:08:24.229 *****
2026-01-07 00:59:43.190286 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190289 | orchestrator |
2026-01-07 00:59:43.190292 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-07 00:59:43.190295 | orchestrator | Wednesday 07 January 2026 00:57:00 +0000 (0:00:00.243) 0:08:24.473 *****
2026-01-07 00:59:43.190298 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190301 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:59:43.190304 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:59:43.190308 | orchestrator |
2026-01-07 00:59:43.190311 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-07 00:59:43.190315 | orchestrator | Wednesday 07 January 2026 00:57:01 +0000 (0:00:00.306) 0:08:24.779 *****
2026-01-07 00:59:43.190319 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:59:43.190322 | orchestrator |
2026-01-07 00:59:43.190325 | orchestrator | RUNNING
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-07 00:59:43.190328 | orchestrator | Wednesday 07 January 2026 00:57:01 +0000 (0:00:00.204) 0:08:24.984 ***** 2026-01-07 00:59:43.190331 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190334 | orchestrator | 2026-01-07 00:59:43.190337 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-07 00:59:43.190340 | orchestrator | Wednesday 07 January 2026 00:57:01 +0000 (0:00:00.240) 0:08:25.224 ***** 2026-01-07 00:59:43.190367 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190370 | orchestrator | 2026-01-07 00:59:43.190373 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-07 00:59:43.190376 | orchestrator | Wednesday 07 January 2026 00:57:01 +0000 (0:00:00.129) 0:08:25.353 ***** 2026-01-07 00:59:43.190379 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190382 | orchestrator | 2026-01-07 00:59:43.190427 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-07 00:59:43.190433 | orchestrator | Wednesday 07 January 2026 00:57:02 +0000 (0:00:00.788) 0:08:26.142 ***** 2026-01-07 00:59:43.190441 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190446 | orchestrator | 2026-01-07 00:59:43.190452 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-07 00:59:43.190456 | orchestrator | Wednesday 07 January 2026 00:57:02 +0000 (0:00:00.246) 0:08:26.389 ***** 2026-01-07 00:59:43.190461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.190466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.190470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.190475 | orchestrator | skipping: [testbed-node-3] 2026-01-07 
00:59:43.190479 | orchestrator | 2026-01-07 00:59:43.190484 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-07 00:59:43.190489 | orchestrator | Wednesday 07 January 2026 00:57:03 +0000 (0:00:00.406) 0:08:26.795 ***** 2026-01-07 00:59:43.190493 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190498 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.190502 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.190507 | orchestrator | 2026-01-07 00:59:43.190512 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-07 00:59:43.190516 | orchestrator | Wednesday 07 January 2026 00:57:03 +0000 (0:00:00.331) 0:08:27.127 ***** 2026-01-07 00:59:43.190521 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190525 | orchestrator | 2026-01-07 00:59:43.190530 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-07 00:59:43.190535 | orchestrator | Wednesday 07 January 2026 00:57:03 +0000 (0:00:00.241) 0:08:27.369 ***** 2026-01-07 00:59:43.190540 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190545 | orchestrator | 2026-01-07 00:59:43.190549 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-07 00:59:43.190554 | orchestrator | 2026-01-07 00:59:43.190558 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 00:59:43.190568 | orchestrator | Wednesday 07 January 2026 00:57:04 +0000 (0:00:00.950) 0:08:28.320 ***** 2026-01-07 00:59:43.190573 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.190580 | orchestrator | 2026-01-07 00:59:43.190584 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-01-07 00:59:43.190590 | orchestrator | Wednesday 07 January 2026 00:57:05 +0000 (0:00:01.178) 0:08:29.498 ***** 2026-01-07 00:59:43.190594 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.190599 | orchestrator | 2026-01-07 00:59:43.190604 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:59:43.190609 | orchestrator | Wednesday 07 January 2026 00:57:06 +0000 (0:00:01.231) 0:08:30.730 ***** 2026-01-07 00:59:43.190614 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190619 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.190623 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.190628 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.190634 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.190639 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.190644 | orchestrator | 2026-01-07 00:59:43.190649 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-07 00:59:43.190654 | orchestrator | Wednesday 07 January 2026 00:57:08 +0000 (0:00:01.123) 0:08:31.854 ***** 2026-01-07 00:59:43.190660 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.190665 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.190669 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.190672 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.190675 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.190678 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.190681 | orchestrator | 2026-01-07 00:59:43.190684 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:59:43.190687 | orchestrator | Wednesday 07 
January 2026 00:57:08 +0000 (0:00:00.658) 0:08:32.513 ***** 2026-01-07 00:59:43.190690 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.190693 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.190697 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.190700 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.190703 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.190706 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.190709 | orchestrator | 2026-01-07 00:59:43.190712 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:59:43.190715 | orchestrator | Wednesday 07 January 2026 00:57:09 +0000 (0:00:00.938) 0:08:33.451 ***** 2026-01-07 00:59:43.190718 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.190724 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.190727 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.190730 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.190733 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.190736 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.190739 | orchestrator | 2026-01-07 00:59:43.190742 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:59:43.190745 | orchestrator | Wednesday 07 January 2026 00:57:10 +0000 (0:00:00.633) 0:08:34.084 ***** 2026-01-07 00:59:43.190748 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190751 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.190755 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.190758 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.190761 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.190764 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.190767 | orchestrator | 2026-01-07 00:59:43.190770 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-01-07 00:59:43.190776 | orchestrator | Wednesday 07 January 2026 00:57:11 +0000 (0:00:01.156) 0:08:35.241 ***** 2026-01-07 00:59:43.190779 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190782 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.190785 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.190788 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.190791 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.190798 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.190801 | orchestrator | 2026-01-07 00:59:43.190804 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:59:43.190807 | orchestrator | Wednesday 07 January 2026 00:57:12 +0000 (0:00:00.644) 0:08:35.886 ***** 2026-01-07 00:59:43.190810 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190813 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.190816 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.190819 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.190823 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.190826 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.190829 | orchestrator | 2026-01-07 00:59:43.190832 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:59:43.190835 | orchestrator | Wednesday 07 January 2026 00:57:12 +0000 (0:00:00.866) 0:08:36.752 ***** 2026-01-07 00:59:43.190838 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.190841 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.190844 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.190847 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.190850 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.190853 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.190857 | orchestrator 
| 2026-01-07 00:59:43.190860 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:59:43.190863 | orchestrator | Wednesday 07 January 2026 00:57:14 +0000 (0:00:01.022) 0:08:37.774 ***** 2026-01-07 00:59:43.190866 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.190869 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.190872 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.190875 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.190878 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.190881 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.190884 | orchestrator | 2026-01-07 00:59:43.190887 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:59:43.190890 | orchestrator | Wednesday 07 January 2026 00:57:15 +0000 (0:00:01.309) 0:08:39.083 ***** 2026-01-07 00:59:43.190894 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190897 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.190900 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.190903 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.190906 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.190909 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.190912 | orchestrator | 2026-01-07 00:59:43.190915 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:59:43.190918 | orchestrator | Wednesday 07 January 2026 00:57:15 +0000 (0:00:00.653) 0:08:39.737 ***** 2026-01-07 00:59:43.190921 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.190925 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.190928 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.190931 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.190934 | orchestrator | ok: [testbed-node-1] 2026-01-07 
00:59:43.190937 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.190940 | orchestrator | 2026-01-07 00:59:43.190943 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:59:43.190946 | orchestrator | Wednesday 07 January 2026 00:57:16 +0000 (0:00:00.874) 0:08:40.611 ***** 2026-01-07 00:59:43.190949 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.190952 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.190958 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.190961 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.190964 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.190967 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.190970 | orchestrator | 2026-01-07 00:59:43.190973 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:59:43.190976 | orchestrator | Wednesday 07 January 2026 00:57:17 +0000 (0:00:00.615) 0:08:41.226 ***** 2026-01-07 00:59:43.190980 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.190983 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.190986 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.190989 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.190992 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.190995 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.190998 | orchestrator | 2026-01-07 00:59:43.191001 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:59:43.191004 | orchestrator | Wednesday 07 January 2026 00:57:18 +0000 (0:00:00.818) 0:08:42.045 ***** 2026-01-07 00:59:43.191008 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.191011 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.191014 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.191017 | orchestrator | skipping: [testbed-node-0] 
2026-01-07 00:59:43.191020 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.191023 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.191026 | orchestrator | 2026-01-07 00:59:43.191029 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:59:43.191032 | orchestrator | Wednesday 07 January 2026 00:57:18 +0000 (0:00:00.586) 0:08:42.632 ***** 2026-01-07 00:59:43.191038 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.191041 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.191044 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.191047 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.191050 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.191053 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.191056 | orchestrator | 2026-01-07 00:59:43.191059 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:59:43.191063 | orchestrator | Wednesday 07 January 2026 00:57:19 +0000 (0:00:00.825) 0:08:43.457 ***** 2026-01-07 00:59:43.191066 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.191069 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.191072 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.191077 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:43.191085 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:43.191090 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:43.191095 | orchestrator | 2026-01-07 00:59:43.191100 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:59:43.191105 | orchestrator | Wednesday 07 January 2026 00:57:20 +0000 (0:00:00.614) 0:08:44.072 ***** 2026-01-07 00:59:43.191110 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.191115 | orchestrator | skipping: [testbed-node-4] 
2026-01-07 00:59:43.191120 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.191125 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.191134 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.191139 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.191145 | orchestrator | 2026-01-07 00:59:43.191150 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:59:43.191155 | orchestrator | Wednesday 07 January 2026 00:57:21 +0000 (0:00:00.946) 0:08:45.018 ***** 2026-01-07 00:59:43.191161 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.191164 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.191167 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.191170 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.191173 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.191176 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.191179 | orchestrator | 2026-01-07 00:59:43.191188 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:59:43.191191 | orchestrator | Wednesday 07 January 2026 00:57:21 +0000 (0:00:00.618) 0:08:45.637 ***** 2026-01-07 00:59:43.191194 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.191197 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.191200 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.191203 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.191206 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.191209 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.191212 | orchestrator | 2026-01-07 00:59:43.191215 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-07 00:59:43.191218 | orchestrator | Wednesday 07 January 2026 00:57:23 +0000 (0:00:01.562) 0:08:47.200 ***** 2026-01-07 00:59:43.191221 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-01-07 00:59:43.191224 | orchestrator | 2026-01-07 00:59:43.191227 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-01-07 00:59:43.191231 | orchestrator | Wednesday 07 January 2026 00:57:27 +0000 (0:00:04.336) 0:08:51.536 ***** 2026-01-07 00:59:43.191234 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:59:43.191237 | orchestrator | 2026-01-07 00:59:43.191240 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-01-07 00:59:43.191243 | orchestrator | Wednesday 07 January 2026 00:57:29 +0000 (0:00:02.095) 0:08:53.631 ***** 2026-01-07 00:59:43.191246 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.191249 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.191252 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.191255 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.191258 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.191261 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.191264 | orchestrator | 2026-01-07 00:59:43.191267 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-01-07 00:59:43.191271 | orchestrator | Wednesday 07 January 2026 00:57:31 +0000 (0:00:01.630) 0:08:55.262 ***** 2026-01-07 00:59:43.191274 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.191277 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.191280 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.191283 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.191286 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.191289 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.191292 | orchestrator | 2026-01-07 00:59:43.191295 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-01-07 00:59:43.191298 | orchestrator | Wednesday 07 January 2026 00:57:32 +0000 (0:00:01.043) 0:08:56.305 ***** 2026-01-07 00:59:43.191302 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.191306 | orchestrator | 2026-01-07 00:59:43.191309 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-01-07 00:59:43.191312 | orchestrator | Wednesday 07 January 2026 00:57:33 +0000 (0:00:01.310) 0:08:57.615 ***** 2026-01-07 00:59:43.191315 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.191318 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.191321 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.191324 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.191327 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.191330 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.191333 | orchestrator | 2026-01-07 00:59:43.191336 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-01-07 00:59:43.191339 | orchestrator | Wednesday 07 January 2026 00:57:35 +0000 (0:00:01.995) 0:08:59.611 ***** 2026-01-07 00:59:43.191391 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.191395 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.191398 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.191403 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.191406 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.191409 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.191412 | orchestrator | 2026-01-07 00:59:43.191416 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-01-07 00:59:43.191421 | orchestrator | Wednesday 07 January 2026 00:57:39 +0000 (0:00:04.030) 
0:09:03.641 ***** 2026-01-07 00:59:43.191424 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:43.191428 | orchestrator | 2026-01-07 00:59:43.191431 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-01-07 00:59:43.191434 | orchestrator | Wednesday 07 January 2026 00:57:41 +0000 (0:00:01.331) 0:09:04.973 ***** 2026-01-07 00:59:43.191437 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.191440 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.191443 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.191446 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.191449 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.191452 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.191455 | orchestrator | 2026-01-07 00:59:43.191458 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-01-07 00:59:43.191462 | orchestrator | Wednesday 07 January 2026 00:57:42 +0000 (0:00:00.830) 0:09:05.804 ***** 2026-01-07 00:59:43.191465 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.191468 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.191471 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.191474 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:43.191480 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:43.191483 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:43.191486 | orchestrator | 2026-01-07 00:59:43.191489 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-01-07 00:59:43.191492 | orchestrator | Wednesday 07 January 2026 00:57:44 +0000 (0:00:02.377) 0:09:08.182 ***** 2026-01-07 00:59:43.191495 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.191498 | orchestrator | 
ok: [testbed-node-4] 2026-01-07 00:59:43.191502 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.191505 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:43.191508 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:43.191511 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:43.191514 | orchestrator | 2026-01-07 00:59:43.191517 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-01-07 00:59:43.191520 | orchestrator | 2026-01-07 00:59:43.191523 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 00:59:43.191526 | orchestrator | Wednesday 07 January 2026 00:57:45 +0000 (0:00:01.112) 0:09:09.294 ***** 2026-01-07 00:59:43.191530 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:59:43.191533 | orchestrator | 2026-01-07 00:59:43.191536 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-07 00:59:43.191539 | orchestrator | Wednesday 07 January 2026 00:57:46 +0000 (0:00:00.497) 0:09:09.792 ***** 2026-01-07 00:59:43.191542 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:59:43.191545 | orchestrator | 2026-01-07 00:59:43.191548 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:59:43.191551 | orchestrator | Wednesday 07 January 2026 00:57:46 +0000 (0:00:00.772) 0:09:10.564 ***** 2026-01-07 00:59:43.191554 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.191558 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.191561 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.191564 | orchestrator | 2026-01-07 00:59:43.191567 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-01-07 00:59:43.191572 | orchestrator | Wednesday 07 January 2026 00:57:47 +0000 (0:00:00.326) 0:09:10.890 ***** 2026-01-07 00:59:43.191575 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.191578 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.191581 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.191584 | orchestrator | 2026-01-07 00:59:43.191588 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:59:43.191591 | orchestrator | Wednesday 07 January 2026 00:57:47 +0000 (0:00:00.710) 0:09:11.600 ***** 2026-01-07 00:59:43.191594 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.191597 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.191600 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.191603 | orchestrator | 2026-01-07 00:59:43.191606 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:59:43.191609 | orchestrator | Wednesday 07 January 2026 00:57:48 +0000 (0:00:01.061) 0:09:12.662 ***** 2026-01-07 00:59:43.191612 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.191615 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.191618 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.191621 | orchestrator | 2026-01-07 00:59:43.191625 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:59:43.191628 | orchestrator | Wednesday 07 January 2026 00:57:49 +0000 (0:00:00.736) 0:09:13.398 ***** 2026-01-07 00:59:43.191631 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.191634 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.191637 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.191640 | orchestrator | 2026-01-07 00:59:43.191643 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-07 
2026-01-07 00:59:43.191646 | orchestrator | Wednesday 07 January 2026 00:57:49 +0000 (0:00:00.335) 0:09:13.734 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 07 January 2026 00:57:50 +0000 (0:00:00.307) 0:09:14.041 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 07 January 2026 00:57:50 +0000 (0:00:00.567) 0:09:14.609 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 07 January 2026 00:57:51 +0000 (0:00:00.797) 0:09:15.406 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 07 January 2026 00:57:52 +0000 (0:00:00.756) 0:09:16.162 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 07 January 2026 00:57:52 +0000 (0:00:00.316) 0:09:16.479 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 07 January 2026 00:57:53 +0000 (0:00:00.576) 0:09:17.055 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 07 January 2026 00:57:53 +0000 (0:00:00.352) 0:09:17.374 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 07 January 2026 00:57:53 +0000 (0:00:00.352) 0:09:17.726 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 07 January 2026 00:57:54 +0000 (0:00:00.313) 0:09:18.040 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 07 January 2026 00:57:54 +0000 (0:00:00.580) 0:09:18.620 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 07 January 2026 00:57:55 +0000 (0:00:00.299) 0:09:18.919 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 07 January 2026 00:57:55 +0000 (0:00:00.341) 0:09:19.261 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 07 January 2026 00:57:55 +0000 (0:00:00.313) 0:09:19.575 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Wednesday 07 January 2026 00:57:56 +0000 (0:00:00.850) 0:09:20.426 *****
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Wednesday 07 January 2026 00:57:57 +0000 (0:00:00.437) 0:09:20.864 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Wednesday 07 January 2026 00:57:58 +0000 (0:00:01.810) 0:09:22.674 *****
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Wednesday 07 January 2026 00:57:59 +0000 (0:00:00.201) 0:09:22.875 *****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Wednesday 07 January 2026 00:58:08 +0000 (0:00:09.401) 0:09:32.277 *****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Wednesday 07 January 2026 00:58:11 +0000 (0:00:03.284) 0:09:35.561 *****
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Wednesday 07 January 2026 00:58:12 +0000 (0:00:00.566) 0:09:36.128 *****
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Wednesday 07 January 2026 00:58:13 +0000 (0:00:01.045) 0:09:37.173 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Wednesday 07 January 2026 00:58:15 +0000 (0:00:02.226) 0:09:39.400 *****
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]

TASK [ceph-mds : Create mds keyring] *******************************************
Wednesday 07 January 2026 00:58:17 +0000 (0:00:01.568) 0:09:40.969 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Wednesday 07 January 2026 00:58:19 +0000 (0:00:02.674) 0:09:43.644 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Wednesday 07 January 2026 00:58:20 +0000 (0:00:00.317) 0:09:43.961 *****
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Wednesday 07 January 2026 00:58:21 +0000 (0:00:00.878) 0:09:44.839 *****
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Wednesday 07 January 2026 00:58:21 +0000 (0:00:00.676) 0:09:45.515 *****
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Wednesday 07 January 2026 00:58:23 +0000 (0:00:01.490) 0:09:47.006 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Wednesday 07 January 2026 00:58:24 +0000 (0:00:01.503) 0:09:48.510 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Wednesday 07 January 2026 00:58:26 +0000 (0:00:01.856) 0:09:50.366 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Wednesday 07 January 2026 00:58:28 +0000 (0:00:01.979) 0:09:52.346 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Wednesday 07 January 2026 00:58:30 +0000 (0:00:01.554) 0:09:53.900 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Wednesday 07 January 2026 00:58:30 +0000 (0:00:00.753) 0:09:54.653 *****
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Wednesday 07 January 2026 00:58:31 +0000 (0:00:01.061) 0:09:55.714 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Wednesday 07 January 2026 00:58:32 +0000 (0:00:00.445) 0:09:56.160 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Wednesday 07 January 2026 00:58:33 +0000 (0:00:01.043) 0:09:57.204 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Wednesday 07 January 2026 00:58:34 +0000 (0:00:00.877) 0:09:58.081 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Wednesday 07 January 2026 00:58:35 +0000 (0:00:00.801) 0:09:58.883 *****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Wednesday 07 January 2026 00:58:35 +0000 (0:00:00.539) 0:09:59.423 *****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Wednesday 07 January 2026 00:58:36 +0000 (0:00:00.724) 0:10:00.148 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Wednesday 07 January 2026 00:58:36 +0000 (0:00:00.306) 0:10:00.454 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Wednesday 07 January 2026 00:58:37 +0000 (0:00:00.644) 0:10:01.099 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Wednesday 07 January 2026 00:58:38 +0000 (0:00:00.970) 0:10:02.070 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 07 January 2026 00:58:39 +0000 (0:00:00.761) 0:10:02.831 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 07 January 2026 00:58:39 +0000 (0:00:00.315) 0:10:03.147 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 07 January 2026 00:58:39 +0000 (0:00:00.291) 0:10:03.438 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 07 January 2026 00:58:40 +0000 (0:00:00.557) 0:10:03.996 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 07 January 2026 00:58:41 +0000 (0:00:00.819) 0:10:04.815 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 07 January 2026 00:58:41 +0000 (0:00:00.706) 0:10:05.521 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 07 January 2026 00:58:42 +0000 (0:00:00.306) 0:10:05.827 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 07 January 2026 00:58:42 +0000 (0:00:00.546) 0:10:06.373 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 07 January 2026 00:58:42 +0000 (0:00:00.344) 0:10:06.719 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 07 January 2026 00:58:43 +0000 (0:00:00.340) 0:10:07.059 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 07 January 2026 00:58:43 +0000 (0:00:00.330) 0:10:07.389 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 07 January 2026 00:58:44 +0000 (0:00:00.570) 0:10:07.960 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 07 January 2026 00:58:44 +0000 (0:00:00.315) 0:10:08.275 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 07 January 2026 00:58:44 +0000 (0:00:00.305) 0:10:08.580 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 07 January 2026 00:58:45 +0000 (0:00:00.331) 0:10:08.912 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Wednesday 07 January 2026 00:58:45 +0000 (0:00:00.825) 0:10:09.737 *****
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Wednesday 07 January 2026 00:58:46 +0000 (0:00:00.593) 0:10:10.330 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Wednesday 07 January 2026 00:58:49 +0000 (0:00:02.578) 0:10:12.909 *****
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
Wednesday 07 January 2026 00:58:50 +0000 (0:00:01.500) 0:10:14.410 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
Wednesday 07 January 2026 00:58:50 +0000 (0:00:00.322) 0:10:14.732 *****
included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Create rados gateway directories] *****************************
Wednesday 07 January 2026 00:58:51 +0000 (0:00:00.537) 0:10:15.269 *****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : Create rgw keyrings] ******************************************
Wednesday 07 January 2026 00:58:52 +0000 (0:00:01.260) 0:10:16.529 *****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]

TASK [ceph-rgw : Get keys from monitors] ***************************************
Wednesday 07 January 2026 00:58:56 +0000 (0:00:04.151) 0:10:20.681 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Wednesday 07 January 2026 00:58:59 +0000 (0:00:02.158) 0:10:22.840 *****
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Rgw pool creation tasks] **************************************
Wednesday 07 January 2026 00:59:00 +0000 (0:00:01.133) 0:10:23.973 *****
included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3

TASK [ceph-rgw : Create ec profile] ********************************************
Wednesday 07 January 2026 00:59:00 +0000 (0:00:00.232) 0:10:24.206 *****
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Set crush rule] ***********************************************
Wednesday 07 January 2026 00:59:01 +0000 (0:00:01.135) 0:10:25.342 *****
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Create rgw pools] *********************************************
Wednesday 07 January 2026 00:59:02 +0000 (0:00:00.633) 0:10:25.975 *****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})

TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
Wednesday 07 January 2026 00:59:30 +0000 (0:00:28.156) 0:10:54.132 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
Wednesday 07 January 2026 00:59:30 +0000 (0:00:00.344) 0:10:54.476 ***** 2026-01-07 00:59:43.193195 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.193198 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.193201 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.193204 | orchestrator | 2026-01-07 00:59:43.193207 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-07 00:59:43.193210 | orchestrator | Wednesday 07 January 2026 00:59:31 +0000 (0:00:00.296) 0:10:54.773 ***** 2026-01-07 00:59:43.193213 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:59:43.193216 | orchestrator | 2026-01-07 00:59:43.193219 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-07 00:59:43.193223 | orchestrator | Wednesday 07 January 2026 00:59:31 +0000 (0:00:00.816) 0:10:55.589 ***** 2026-01-07 00:59:43.193228 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:59:43.193231 | orchestrator | 2026-01-07 00:59:43.193234 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-07 00:59:43.193237 | orchestrator | Wednesday 07 January 2026 00:59:32 +0000 (0:00:00.544) 0:10:56.133 ***** 2026-01-07 00:59:43.193242 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.193246 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.193249 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.193252 | orchestrator | 2026-01-07 00:59:43.193255 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-07 00:59:43.193258 | orchestrator | Wednesday 07 January 2026 00:59:33 +0000 (0:00:01.184) 0:10:57.318 ***** 2026-01-07 00:59:43.193261 | orchestrator | changed: 
[testbed-node-3] 2026-01-07 00:59:43.193264 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.193267 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.193270 | orchestrator | 2026-01-07 00:59:43.193274 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-07 00:59:43.193277 | orchestrator | Wednesday 07 January 2026 00:59:34 +0000 (0:00:01.323) 0:10:58.642 ***** 2026-01-07 00:59:43.193280 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:59:43.193283 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:59:43.193286 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:59:43.193289 | orchestrator | 2026-01-07 00:59:43.193292 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-07 00:59:43.193295 | orchestrator | Wednesday 07 January 2026 00:59:36 +0000 (0:00:01.836) 0:11:00.479 ***** 2026-01-07 00:59:43.193298 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:59:43.193301 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:59:43.193305 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:59:43.193308 | orchestrator | 2026-01-07 00:59:43.193311 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-07 00:59:43.193314 | orchestrator | Wednesday 07 January 2026 00:59:39 +0000 (0:00:02.847) 0:11:03.326 ***** 2026-01-07 00:59:43.193317 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.193320 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.193323 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.193326 | orchestrator 
| 2026-01-07 00:59:43.193329 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-07 00:59:43.193332 | orchestrator | Wednesday 07 January 2026 00:59:39 +0000 (0:00:00.335) 0:11:03.661 ***** 2026-01-07 00:59:43.193336 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:59:43.193339 | orchestrator | 2026-01-07 00:59:43.193351 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-07 00:59:43.193354 | orchestrator | Wednesday 07 January 2026 00:59:40 +0000 (0:00:00.491) 0:11:04.152 ***** 2026-01-07 00:59:43.193357 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.193360 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.193363 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.193366 | orchestrator | 2026-01-07 00:59:43.193369 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-07 00:59:43.193373 | orchestrator | Wednesday 07 January 2026 00:59:40 +0000 (0:00:00.554) 0:11:04.707 ***** 2026-01-07 00:59:43.193376 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:59:43.193379 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:59:43.193382 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:59:43.193385 | orchestrator | 2026-01-07 00:59:43.193388 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-07 00:59:43.193400 | orchestrator | Wednesday 07 January 2026 00:59:41 +0000 (0:00:00.306) 0:11:05.013 ***** 2026-01-07 00:59:43.193404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:59:43.193407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:59:43.193410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:59:43.193413 | orchestrator 
| skipping: [testbed-node-3] 2026-01-07 00:59:43.193416 | orchestrator | 2026-01-07 00:59:43.193419 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-07 00:59:43.193422 | orchestrator | Wednesday 07 January 2026 00:59:41 +0000 (0:00:00.603) 0:11:05.617 ***** 2026-01-07 00:59:43.193425 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:59:43.193428 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:59:43.193431 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:59:43.193434 | orchestrator | 2026-01-07 00:59:43.193437 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:59:43.193442 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-07 00:59:43.193446 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-07 00:59:43.193449 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-07 00:59:43.193452 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-07 00:59:43.193455 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-07 00:59:43.193458 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-07 00:59:43.193461 | orchestrator | 2026-01-07 00:59:43.193464 | orchestrator | 2026-01-07 00:59:43.193467 | orchestrator | 2026-01-07 00:59:43.193472 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:59:43.193475 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:00.274) 0:11:05.892 ***** 2026-01-07 00:59:43.193479 | orchestrator | =============================================================================== 
2026-01-07 00:59:43.193482 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.64s 2026-01-07 00:59:43.193485 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.27s 2026-01-07 00:59:43.193488 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.12s 2026-01-07 00:59:43.193491 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.16s 2026-01-07 00:59:43.193494 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 12.75s 2026-01-07 00:59:43.193497 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.59s 2026-01-07 00:59:43.193500 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.69s 2026-01-07 00:59:43.193503 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.68s 2026-01-07 00:59:43.193506 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.40s 2026-01-07 00:59:43.193509 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.67s 2026-01-07 00:59:43.193512 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.12s 2026-01-07 00:59:43.193515 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.00s 2026-01-07 00:59:43.193518 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.79s 2026-01-07 00:59:43.193521 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.34s 2026-01-07 00:59:43.193527 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.34s 2026-01-07 00:59:43.193530 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.15s 2026-01-07 
00:59:43.193533 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.03s 2026-01-07 00:59:43.193536 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.65s 2026-01-07 00:59:43.193539 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.54s 2026-01-07 00:59:43.193542 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.28s 2026-01-07 00:59:43.193545 | orchestrator | 2026-01-07 00:59:43 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 00:59:43.193548 | orchestrator | 2026-01-07 00:59:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:46.232936 | orchestrator | 2026-01-07 00:59:46 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 00:59:46.236431 | orchestrator | 2026-01-07 00:59:46 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 00:59:46.238756 | orchestrator | 2026-01-07 00:59:46 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 00:59:46.238805 | orchestrator | 2026-01-07 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:49.289665 | orchestrator | 2026-01-07 00:59:49 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 00:59:49.292733 | orchestrator | 2026-01-07 00:59:49 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 00:59:49.295138 | orchestrator | 2026-01-07 00:59:49 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 00:59:49.295197 | orchestrator | 2026-01-07 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:52.345632 | orchestrator | 2026-01-07 00:59:52 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 00:59:52.347564 | orchestrator | 2026-01-07 00:59:52 | INFO  | Task 
ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 00:59:52.350583 | orchestrator | 2026-01-07 00:59:52 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 00:59:52.350677 | orchestrator | 2026-01-07 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:55.404123 | orchestrator | 2026-01-07 00:59:55 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 00:59:55.405260 | orchestrator | 2026-01-07 00:59:55 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 00:59:55.407264 | orchestrator | 2026-01-07 00:59:55 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 00:59:55.407367 | orchestrator | 2026-01-07 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:58.458357 | orchestrator | 2026-01-07 00:59:58 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 00:59:58.460475 | orchestrator | 2026-01-07 00:59:58 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 00:59:58.462143 | orchestrator | 2026-01-07 00:59:58 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 00:59:58.462211 | orchestrator | 2026-01-07 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:01.503147 | orchestrator | 2026-01-07 01:00:01 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:01.504209 | orchestrator | 2026-01-07 01:00:01 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:01.506278 | orchestrator | 2026-01-07 01:00:01 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:01.506377 | orchestrator | 2026-01-07 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:04.556000 | orchestrator | 2026-01-07 01:00:04 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state 
STARTED 2026-01-07 01:00:04.558003 | orchestrator | 2026-01-07 01:00:04 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:04.559039 | orchestrator | 2026-01-07 01:00:04 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:04.559094 | orchestrator | 2026-01-07 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:07.605445 | orchestrator | 2026-01-07 01:00:07 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:07.607656 | orchestrator | 2026-01-07 01:00:07 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:07.609548 | orchestrator | 2026-01-07 01:00:07 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:07.609661 | orchestrator | 2026-01-07 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:10.657791 | orchestrator | 2026-01-07 01:00:10 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:10.659440 | orchestrator | 2026-01-07 01:00:10 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:10.661343 | orchestrator | 2026-01-07 01:00:10 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:10.661390 | orchestrator | 2026-01-07 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:13.703253 | orchestrator | 2026-01-07 01:00:13 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:13.704496 | orchestrator | 2026-01-07 01:00:13 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:13.706530 | orchestrator | 2026-01-07 01:00:13 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:13.706968 | orchestrator | 2026-01-07 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:16.762351 | orchestrator | 
2026-01-07 01:00:16 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:16.765355 | orchestrator | 2026-01-07 01:00:16 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:16.768566 | orchestrator | 2026-01-07 01:00:16 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:16.768713 | orchestrator | 2026-01-07 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:19.826313 | orchestrator | 2026-01-07 01:00:19 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:19.827793 | orchestrator | 2026-01-07 01:00:19 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:19.829747 | orchestrator | 2026-01-07 01:00:19 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:19.829783 | orchestrator | 2026-01-07 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:22.881150 | orchestrator | 2026-01-07 01:00:22 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:22.882568 | orchestrator | 2026-01-07 01:00:22 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:22.884157 | orchestrator | 2026-01-07 01:00:22 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:22.884382 | orchestrator | 2026-01-07 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:25.942238 | orchestrator | 2026-01-07 01:00:25 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:25.945198 | orchestrator | 2026-01-07 01:00:25 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:25.947260 | orchestrator | 2026-01-07 01:00:25 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:25.947513 | orchestrator | 2026-01-07 01:00:25 | INFO  | 
Wait 1 second(s) until the next check 2026-01-07 01:00:29.004679 | orchestrator | 2026-01-07 01:00:29 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:29.006304 | orchestrator | 2026-01-07 01:00:29 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:29.009179 | orchestrator | 2026-01-07 01:00:29 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state STARTED 2026-01-07 01:00:29.009239 | orchestrator | 2026-01-07 01:00:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:32.058413 | orchestrator | 2026-01-07 01:00:32 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:32.059524 | orchestrator | 2026-01-07 01:00:32 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:32.061964 | orchestrator | 2026-01-07 01:00:32 | INFO  | Task 4ada237e-2237-4c14-b673-8980bdf9020d is in state SUCCESS 2026-01-07 01:00:32.063341 | orchestrator | 2026-01-07 01:00:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:32.064411 | orchestrator | 2026-01-07 01:00:32.064453 | orchestrator | 2026-01-07 01:00:32.064459 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:00:32.064464 | orchestrator | 2026-01-07 01:00:32.064469 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:00:32.064475 | orchestrator | Wednesday 07 January 2026 00:57:54 +0000 (0:00:00.273) 0:00:00.273 ***** 2026-01-07 01:00:32.064481 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:32.064489 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:32.064496 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:32.064503 | orchestrator | 2026-01-07 01:00:32.064510 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:00:32.064517 | orchestrator | Wednesday 07 
January 2026 00:57:54 +0000 (0:00:00.293) 0:00:00.567 ***** 2026-01-07 01:00:32.064523 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-07 01:00:32.064531 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-07 01:00:32.064537 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-07 01:00:32.064543 | orchestrator | 2026-01-07 01:00:32.064548 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-07 01:00:32.064554 | orchestrator | 2026-01-07 01:00:32.064559 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 01:00:32.064570 | orchestrator | Wednesday 07 January 2026 00:57:54 +0000 (0:00:00.420) 0:00:00.988 ***** 2026-01-07 01:00:32.064577 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:00:32.064583 | orchestrator | 2026-01-07 01:00:32.064589 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-07 01:00:32.064595 | orchestrator | Wednesday 07 January 2026 00:57:55 +0000 (0:00:00.524) 0:00:01.512 ***** 2026-01-07 01:00:32.064601 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-07 01:00:32.064607 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-07 01:00:32.064629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-07 01:00:32.064635 | orchestrator | 2026-01-07 01:00:32.064641 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-07 01:00:32.064647 | orchestrator | Wednesday 07 January 2026 00:57:56 +0000 (0:00:00.688) 0:00:02.201 ***** 2026-01-07 01:00:32.064663 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.064673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.064690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.064700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.064711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.064725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.064732 | orchestrator | 2026-01-07 01:00:32.064736 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 
01:00:32.064739 | orchestrator | Wednesday 07 January 2026 00:57:57 +0000 (0:00:01.742) 0:00:03.944 ***** 2026-01-07 01:00:32.064743 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:00:32.064747 | orchestrator | 2026-01-07 01:00:32.064751 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-07 01:00:32.064755 | orchestrator | Wednesday 07 January 2026 00:57:58 +0000 (0:00:00.512) 0:00:04.457 ***** 2026-01-07 01:00:32.064764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.064771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.064779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.064785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.064793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.064797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.064804 | orchestrator | 2026-01-07 01:00:32.064808 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-07 01:00:32.064812 | orchestrator | Wednesday 07 January 2026 00:58:00 +0000 (0:00:02.302) 0:00:06.759 ***** 2026-01-07 01:00:32.064818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 01:00:32.064823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 01:00:32.064827 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:32.064831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 01:00:32.064838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 01:00:32.064846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 01:00:32.064850 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:32.064856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 01:00:32.064861 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:32.064865 | orchestrator | 2026-01-07 01:00:32.064868 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-07 01:00:32.064872 | orchestrator | Wednesday 07 January 2026 00:58:01 +0000 (0:00:01.056) 0:00:07.816 ***** 2026-01-07 01:00:32.064876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 01:00:32.064884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 01:00:32.064894 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:32.064901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 01:00:32.064909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 01:00:32.064916 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:32.064923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 01:00:32.064934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 01:00:32.064959 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:32.064966 | orchestrator | 2026-01-07 01:00:32.064972 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-07 01:00:32.064979 | orchestrator | Wednesday 07 January 2026 00:58:02 +0000 (0:00:00.918) 0:00:08.735 ***** 2026-01-07 01:00:32.064985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.064998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.065005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.065021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.065033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.065044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.065050 | orchestrator | 2026-01-07 01:00:32.065057 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-07 01:00:32.065062 | orchestrator | Wednesday 07 January 2026 00:58:05 +0000 (0:00:02.753) 0:00:11.489 ***** 2026-01-07 01:00:32.065066 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:32.065071 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:32.065076 | 
orchestrator | changed: [testbed-node-1] 2026-01-07 01:00:32.065080 | orchestrator | 2026-01-07 01:00:32.065085 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-07 01:00:32.065089 | orchestrator | Wednesday 07 January 2026 00:58:08 +0000 (0:00:03.148) 0:00:14.637 ***** 2026-01-07 01:00:32.065094 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:32.065099 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:00:32.065103 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:32.065108 | orchestrator | 2026-01-07 01:00:32.065112 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-07 01:00:32.065117 | orchestrator | Wednesday 07 January 2026 00:58:10 +0000 (0:00:01.731) 0:00:16.369 ***** 2026-01-07 01:00:32.065121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.065132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.065137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 01:00:32.065144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.065150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.065161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 01:00:32.065166 | orchestrator | 2026-01-07 01:00:32.065171 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 01:00:32.065176 | orchestrator | Wednesday 07 January 2026 00:58:12 +0000 (0:00:02.357) 0:00:18.726 ***** 2026-01-07 01:00:32.065180 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:32.065185 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:32.065189 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:32.065194 | orchestrator | 2026-01-07 01:00:32.065198 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-07 01:00:32.065204 | orchestrator | Wednesday 07 January 2026 00:58:12 +0000 (0:00:00.284) 0:00:19.010 ***** 2026-01-07 01:00:32.065211 | orchestrator | 2026-01-07 01:00:32.065217 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-07 01:00:32.065223 | orchestrator | Wednesday 07 January 2026 00:58:12 +0000 (0:00:00.081) 0:00:19.092 ***** 2026-01-07 01:00:32.065230 | orchestrator | 2026-01-07 01:00:32.065255 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-07 01:00:32.065262 | 
orchestrator | Wednesday 07 January 2026 00:58:13 +0000 (0:00:00.068) 0:00:19.161 ***** 2026-01-07 01:00:32.065269 | orchestrator | 2026-01-07 01:00:32.065275 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-07 01:00:32.065280 | orchestrator | Wednesday 07 January 2026 00:58:13 +0000 (0:00:00.066) 0:00:19.228 ***** 2026-01-07 01:00:32.065284 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:32.065289 | orchestrator | 2026-01-07 01:00:32.065293 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-07 01:00:32.065298 | orchestrator | Wednesday 07 January 2026 00:58:13 +0000 (0:00:00.228) 0:00:19.456 ***** 2026-01-07 01:00:32.065302 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:32.065307 | orchestrator | 2026-01-07 01:00:32.065312 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-07 01:00:32.065317 | orchestrator | Wednesday 07 January 2026 00:58:13 +0000 (0:00:00.628) 0:00:20.085 ***** 2026-01-07 01:00:32.065321 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:32.065326 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:32.065330 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:00:32.065338 | orchestrator | 2026-01-07 01:00:32.065343 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-07 01:00:32.065347 | orchestrator | Wednesday 07 January 2026 00:59:09 +0000 (0:00:55.274) 0:01:15.360 ***** 2026-01-07 01:00:32.065352 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:32.065356 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:32.065365 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:00:32.065370 | orchestrator | 2026-01-07 01:00:32.065375 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 01:00:32.065379 | 
orchestrator | Wednesday 07 January 2026 01:00:21 +0000 (0:01:12.386) 0:02:27.746 ***** 2026-01-07 01:00:32.065384 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:00:32.065389 | orchestrator | 2026-01-07 01:00:32.065394 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-07 01:00:32.065399 | orchestrator | Wednesday 07 January 2026 01:00:22 +0000 (0:00:00.735) 0:02:28.481 ***** 2026-01-07 01:00:32.065408 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:32.065417 | orchestrator | 2026-01-07 01:00:32.065421 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-07 01:00:32.065425 | orchestrator | Wednesday 07 January 2026 01:00:24 +0000 (0:00:02.058) 0:02:30.540 ***** 2026-01-07 01:00:32.065429 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:32.065432 | orchestrator | 2026-01-07 01:00:32.065436 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-07 01:00:32.065440 | orchestrator | Wednesday 07 January 2026 01:00:26 +0000 (0:00:02.102) 0:02:32.643 ***** 2026-01-07 01:00:32.065444 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:32.065448 | orchestrator | 2026-01-07 01:00:32.065452 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-07 01:00:32.065455 | orchestrator | Wednesday 07 January 2026 01:00:29 +0000 (0:00:02.574) 0:02:35.218 ***** 2026-01-07 01:00:32.065459 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:32.065463 | orchestrator | 2026-01-07 01:00:32.065467 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:00:32.065474 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 01:00:32.065483 | orchestrator 
| testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 01:00:32.065493 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 01:00:32.065500 | orchestrator | 2026-01-07 01:00:32.065507 | orchestrator | 2026-01-07 01:00:32.065513 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:00:32.065524 | orchestrator | Wednesday 07 January 2026 01:00:31 +0000 (0:00:02.411) 0:02:37.629 ***** 2026-01-07 01:00:32.065531 | orchestrator | =============================================================================== 2026-01-07 01:00:32.065538 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 72.39s 2026-01-07 01:00:32.065545 | orchestrator | opensearch : Restart opensearch container ------------------------------ 55.27s 2026-01-07 01:00:32.065552 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.15s 2026-01-07 01:00:32.065558 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.75s 2026-01-07 01:00:32.065562 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.57s 2026-01-07 01:00:32.065566 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.41s 2026-01-07 01:00:32.065570 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.36s 2026-01-07 01:00:32.065573 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.30s 2026-01-07 01:00:32.065577 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.10s 2026-01-07 01:00:32.065581 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.06s 2026-01-07 01:00:32.065585 | orchestrator | opensearch : Ensuring config 
directories exist -------------------------- 1.74s 2026-01-07 01:00:32.065589 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.73s 2026-01-07 01:00:32.065597 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.06s 2026-01-07 01:00:32.065601 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.92s 2026-01-07 01:00:32.065605 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.74s 2026-01-07 01:00:32.065609 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.69s 2026-01-07 01:00:32.065612 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.63s 2026-01-07 01:00:32.065617 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-01-07 01:00:32.065621 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-01-07 01:00:32.065625 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-01-07 01:00:35.110701 | orchestrator | 2026-01-07 01:00:35 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:35.112129 | orchestrator | 2026-01-07 01:00:35 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:35.112220 | orchestrator | 2026-01-07 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:38.156817 | orchestrator | 2026-01-07 01:00:38 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:38.159262 | orchestrator | 2026-01-07 01:00:38 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:38.159324 | orchestrator | 2026-01-07 01:00:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:41.202156 | orchestrator | 2026-01-07 
01:00:41 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:41.203637 | orchestrator | 2026-01-07 01:00:41 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:41.203731 | orchestrator | 2026-01-07 01:00:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:44.263797 | orchestrator | 2026-01-07 01:00:44 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:44.267074 | orchestrator | 2026-01-07 01:00:44 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:44.267125 | orchestrator | 2026-01-07 01:00:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:47.308852 | orchestrator | 2026-01-07 01:00:47 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:47.310889 | orchestrator | 2026-01-07 01:00:47 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:47.310929 | orchestrator | 2026-01-07 01:00:47 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:50.354482 | orchestrator | 2026-01-07 01:00:50 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:50.356491 | orchestrator | 2026-01-07 01:00:50 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:50.356538 | orchestrator | 2026-01-07 01:00:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:53.406328 | orchestrator | 2026-01-07 01:00:53 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:53.410261 | orchestrator | 2026-01-07 01:00:53 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state STARTED 2026-01-07 01:00:53.410350 | orchestrator | 2026-01-07 01:00:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:56.456545 | orchestrator | 2026-01-07 01:00:56 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state 
STARTED 2026-01-07 01:00:56.458517 | orchestrator | 2026-01-07 01:00:56 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:56.460896 | orchestrator | 2026-01-07 01:00:56 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:00:56.465608 | orchestrator | 2026-01-07 01:00:56 | INFO  | Task ad7c4e9f-9ab7-4550-a8f0-d0b0434bffca is in state SUCCESS 2026-01-07 01:00:56.467272 | orchestrator | 2026-01-07 01:00:56.467327 | orchestrator | 2026-01-07 01:00:56.467335 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-07 01:00:56.467341 | orchestrator | 2026-01-07 01:00:56.467346 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-07 01:00:56.467351 | orchestrator | Wednesday 07 January 2026 00:57:53 +0000 (0:00:00.094) 0:00:00.094 ***** 2026-01-07 01:00:56.467356 | orchestrator | ok: [localhost] => { 2026-01-07 01:00:56.467362 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-07 01:00:56.467367 | orchestrator | } 2026-01-07 01:00:56.467372 | orchestrator | 2026-01-07 01:00:56.467377 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-07 01:00:56.467382 | orchestrator | Wednesday 07 January 2026 00:57:54 +0000 (0:00:00.051) 0:00:00.145 ***** 2026-01-07 01:00:56.467387 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-07 01:00:56.467393 | orchestrator | ...ignoring 2026-01-07 01:00:56.467397 | orchestrator | 2026-01-07 01:00:56.467402 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-07 01:00:56.467407 | orchestrator | Wednesday 07 January 2026 00:57:56 +0000 (0:00:02.888) 0:00:03.034 ***** 2026-01-07 01:00:56.467412 | orchestrator | skipping: [localhost] 2026-01-07 01:00:56.467416 | orchestrator | 2026-01-07 01:00:56.467420 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-07 01:00:56.467425 | orchestrator | Wednesday 07 January 2026 00:57:56 +0000 (0:00:00.062) 0:00:03.096 ***** 2026-01-07 01:00:56.467429 | orchestrator | ok: [localhost] 2026-01-07 01:00:56.467435 | orchestrator | 2026-01-07 01:00:56.467440 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:00:56.467444 | orchestrator | 2026-01-07 01:00:56.467448 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:00:56.467453 | orchestrator | Wednesday 07 January 2026 00:57:57 +0000 (0:00:00.153) 0:00:03.250 ***** 2026-01-07 01:00:56.467457 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.467461 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:56.467465 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:56.467469 | orchestrator | 2026-01-07 01:00:56.467473 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:00:56.467489 | orchestrator | Wednesday 07 January 2026 00:57:57 +0000 (0:00:00.321) 0:00:03.571 ***** 2026-01-07 01:00:56.467688 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-07 01:00:56.467705 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-01-07 01:00:56.467709 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-07 01:00:56.467713 | orchestrator | 2026-01-07 01:00:56.467717 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-07 01:00:56.467721 | orchestrator | 2026-01-07 01:00:56.467725 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-07 01:00:56.467729 | orchestrator | Wednesday 07 January 2026 00:57:57 +0000 (0:00:00.551) 0:00:04.123 ***** 2026-01-07 01:00:56.467733 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-07 01:00:56.467737 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-07 01:00:56.467741 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-07 01:00:56.467744 | orchestrator | 2026-01-07 01:00:56.467748 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 01:00:56.467766 | orchestrator | Wednesday 07 January 2026 00:57:58 +0000 (0:00:00.370) 0:00:04.494 ***** 2026-01-07 01:00:56.467770 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:00:56.467775 | orchestrator | 2026-01-07 01:00:56.467778 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-07 01:00:56.467783 | orchestrator | Wednesday 07 January 2026 00:57:58 +0000 (0:00:00.485) 0:00:04.979 ***** 2026-01-07 01:00:56.467801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 01:00:56.467812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 01:00:56.467821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 01:00:56.467825 | orchestrator | 2026-01-07 01:00:56.467834 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-07 01:00:56.467838 | orchestrator | Wednesday 07 January 2026 00:58:01 +0000 (0:00:03.062) 0:00:08.041 ***** 2026-01-07 01:00:56.467842 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.467847 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.467850 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.467854 | orchestrator | 2026-01-07 01:00:56.467858 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-07 01:00:56.467862 | orchestrator | Wednesday 07 January 2026 00:58:02 +0000 (0:00:00.811) 0:00:08.853 ***** 2026-01-07 01:00:56.467865 | orchestrator | skipping: [testbed-node-1] 2026-01-07 
01:00:56.467869 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.467873 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.467877 | orchestrator | 2026-01-07 01:00:56.467881 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-07 01:00:56.467884 | orchestrator | Wednesday 07 January 2026 00:58:04 +0000 (0:00:01.558) 0:00:10.412 ***** 2026-01-07 01:00:56.467890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 01:00:56.467901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 01:00:56.467909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 
01:00:56.467916 | orchestrator | 2026-01-07 01:00:56.467920 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-07 01:00:56.467924 | orchestrator | Wednesday 07 January 2026 00:58:08 +0000 (0:00:03.842) 0:00:14.254 ***** 2026-01-07 01:00:56.467928 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.467932 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.467936 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.467939 | orchestrator | 2026-01-07 01:00:56.467943 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-07 01:00:56.467947 | orchestrator | Wednesday 07 January 2026 00:58:09 +0000 (0:00:01.013) 0:00:15.268 ***** 2026-01-07 01:00:56.467951 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.467954 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:56.467958 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:00:56.467962 | orchestrator | 2026-01-07 01:00:56.467966 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 01:00:56.467970 | orchestrator | Wednesday 07 January 2026 00:58:13 +0000 (0:00:03.906) 0:00:19.175 ***** 2026-01-07 01:00:56.467973 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:00:56.467977 | orchestrator | 2026-01-07 01:00:56.467981 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-07 01:00:56.467985 | orchestrator | Wednesday 07 January 2026 00:58:13 +0000 (0:00:00.613) 0:00:19.788 ***** 2026-01-07 01:00:56.467992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 01:00:56.467997 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.468003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 01:00:56.468010 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 01:00:56.468023 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468027 | orchestrator | 2026-01-07 01:00:56.468031 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-07 01:00:56.468034 | orchestrator | Wednesday 07 January 2026 00:58:16 +0000 (0:00:03.288) 0:00:23.077 ***** 2026-01-07 01:00:56.468040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 01:00:56.468049 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.468056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 01:00:56.468060 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 01:00:56.468076 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468080 | orchestrator | 2026-01-07 01:00:56.468084 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-07 01:00:56.468088 | orchestrator | Wednesday 07 January 2026 00:58:20 +0000 (0:00:03.123) 0:00:26.200 ***** 2026-01-07 01:00:56.468092 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 01:00:56.468096 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 01:00:56.468113 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.468118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 01:00:56.468122 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468125 | orchestrator | 2026-01-07 01:00:56.468129 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-07 01:00:56.468135 | orchestrator | Wednesday 07 January 2026 00:58:23 +0000 
(0:00:03.463) 0:00:29.664 ***** 2026-01-07 01:00:56.468150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 01:00:56.468166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 01:00:56.468177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 01:00:56.468217 | orchestrator | 2026-01-07 01:00:56.468223 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-07 01:00:56.468229 | orchestrator | Wednesday 07 January 2026 00:58:27 +0000 (0:00:03.861) 0:00:33.525 ***** 2026-01-07 01:00:56.468235 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.468241 | orchestrator | 
changed: [testbed-node-1] 2026-01-07 01:00:56.468246 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:56.468253 | orchestrator | 2026-01-07 01:00:56.468258 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-07 01:00:56.468265 | orchestrator | Wednesday 07 January 2026 00:58:28 +0000 (0:00:00.755) 0:00:34.280 ***** 2026-01-07 01:00:56.468271 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.468277 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:56.468284 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:56.468290 | orchestrator | 2026-01-07 01:00:56.468299 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-07 01:00:56.468305 | orchestrator | Wednesday 07 January 2026 00:58:28 +0000 (0:00:00.669) 0:00:34.950 ***** 2026-01-07 01:00:56.468309 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.468314 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:56.468319 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:56.468324 | orchestrator | 2026-01-07 01:00:56.468328 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-07 01:00:56.468333 | orchestrator | Wednesday 07 January 2026 00:58:29 +0000 (0:00:00.382) 0:00:35.333 ***** 2026-01-07 01:00:56.468339 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-07 01:00:56.468344 | orchestrator | ...ignoring 2026-01-07 01:00:56.468349 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-07 01:00:56.468353 | orchestrator | ...ignoring 2026-01-07 01:00:56.468358 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-07 01:00:56.468362 | orchestrator | ...ignoring 2026-01-07 01:00:56.468367 | orchestrator | 2026-01-07 01:00:56.468371 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-07 01:00:56.468376 | orchestrator | Wednesday 07 January 2026 00:58:40 +0000 (0:00:11.021) 0:00:46.354 ***** 2026-01-07 01:00:56.468380 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.468385 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:56.468389 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:56.468394 | orchestrator | 2026-01-07 01:00:56.468398 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-07 01:00:56.468403 | orchestrator | Wednesday 07 January 2026 00:58:40 +0000 (0:00:00.463) 0:00:46.818 ***** 2026-01-07 01:00:56.468407 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.468412 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468417 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468421 | orchestrator | 2026-01-07 01:00:56.468430 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-07 01:00:56.468435 | orchestrator | Wednesday 07 January 2026 00:58:41 +0000 (0:00:00.628) 0:00:47.446 ***** 2026-01-07 01:00:56.468440 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.468445 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468449 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468454 | orchestrator | 2026-01-07 01:00:56.468458 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-07 01:00:56.468463 | orchestrator | Wednesday 07 January 2026 00:58:41 +0000 (0:00:00.438) 0:00:47.885 ***** 2026-01-07 01:00:56.468468 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 01:00:56.468473 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468477 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468482 | orchestrator | 2026-01-07 01:00:56.468488 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-07 01:00:56.468494 | orchestrator | Wednesday 07 January 2026 00:58:42 +0000 (0:00:00.483) 0:00:48.368 ***** 2026-01-07 01:00:56.468501 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.468507 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:56.468513 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:56.468520 | orchestrator | 2026-01-07 01:00:56.468527 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-07 01:00:56.468534 | orchestrator | Wednesday 07 January 2026 00:58:42 +0000 (0:00:00.394) 0:00:48.763 ***** 2026-01-07 01:00:56.468544 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.468550 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468557 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468560 | orchestrator | 2026-01-07 01:00:56.468564 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 01:00:56.468568 | orchestrator | Wednesday 07 January 2026 00:58:43 +0000 (0:00:00.629) 0:00:49.392 ***** 2026-01-07 01:00:56.468572 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468576 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468580 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-07 01:00:56.468584 | orchestrator | 2026-01-07 01:00:56.468588 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-07 01:00:56.468592 | orchestrator | Wednesday 07 January 2026 00:58:43 +0000 (0:00:00.381) 0:00:49.774 ***** 2026-01-07 
01:00:56.468595 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.468599 | orchestrator | 2026-01-07 01:00:56.468603 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-07 01:00:56.468607 | orchestrator | Wednesday 07 January 2026 00:58:54 +0000 (0:00:10.946) 0:01:00.720 ***** 2026-01-07 01:00:56.468610 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.468614 | orchestrator | 2026-01-07 01:00:56.468618 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 01:00:56.468622 | orchestrator | Wednesday 07 January 2026 00:58:54 +0000 (0:00:00.127) 0:01:00.848 ***** 2026-01-07 01:00:56.468625 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.468629 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468633 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468637 | orchestrator | 2026-01-07 01:00:56.468641 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-07 01:00:56.468644 | orchestrator | Wednesday 07 January 2026 00:58:55 +0000 (0:00:01.007) 0:01:01.855 ***** 2026-01-07 01:00:56.468648 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.468652 | orchestrator | 2026-01-07 01:00:56.468656 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-07 01:00:56.468659 | orchestrator | Wednesday 07 January 2026 00:59:03 +0000 (0:00:08.006) 0:01:09.862 ***** 2026-01-07 01:00:56.468664 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.468667 | orchestrator | 2026-01-07 01:00:56.468671 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-07 01:00:56.468682 | orchestrator | Wednesday 07 January 2026 00:59:05 +0000 (0:00:01.602) 0:01:11.464 ***** 2026-01-07 01:00:56.468686 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.468690 | 
orchestrator | 2026-01-07 01:00:56.468693 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-07 01:00:56.468697 | orchestrator | Wednesday 07 January 2026 00:59:07 +0000 (0:00:02.472) 0:01:13.937 ***** 2026-01-07 01:00:56.468701 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.468705 | orchestrator | 2026-01-07 01:00:56.468709 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-07 01:00:56.468712 | orchestrator | Wednesday 07 January 2026 00:59:07 +0000 (0:00:00.135) 0:01:14.073 ***** 2026-01-07 01:00:56.468716 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.468720 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468724 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468728 | orchestrator | 2026-01-07 01:00:56.468732 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-07 01:00:56.468736 | orchestrator | Wednesday 07 January 2026 00:59:08 +0000 (0:00:00.347) 0:01:14.420 ***** 2026-01-07 01:00:56.468739 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.468743 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-07 01:00:56.468747 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:00:56.468751 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:56.468755 | orchestrator | 2026-01-07 01:00:56.468758 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-07 01:00:56.468762 | orchestrator | skipping: no hosts matched 2026-01-07 01:00:56.468766 | orchestrator | 2026-01-07 01:00:56.468770 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-07 01:00:56.468774 | orchestrator | 2026-01-07 01:00:56.468777 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-01-07 01:00:56.468781 | orchestrator | Wednesday 07 January 2026 00:59:08 +0000 (0:00:00.585) 0:01:15.006 ***** 2026-01-07 01:00:56.468785 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:00:56.468789 | orchestrator | 2026-01-07 01:00:56.468792 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-07 01:00:56.468796 | orchestrator | Wednesday 07 January 2026 00:59:26 +0000 (0:00:18.113) 0:01:33.119 ***** 2026-01-07 01:00:56.468800 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:56.468804 | orchestrator | 2026-01-07 01:00:56.468808 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-07 01:00:56.468811 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:15.544) 0:01:48.663 ***** 2026-01-07 01:00:56.468815 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:56.468819 | orchestrator | 2026-01-07 01:00:56.468823 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-07 01:00:56.468826 | orchestrator | 2026-01-07 01:00:56.468830 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-07 01:00:56.468834 | orchestrator | Wednesday 07 January 2026 00:59:44 +0000 (0:00:02.340) 0:01:51.004 ***** 2026-01-07 01:00:56.468838 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:56.468841 | orchestrator | 2026-01-07 01:00:56.468845 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-07 01:00:56.468849 | orchestrator | Wednesday 07 January 2026 01:00:03 +0000 (0:00:18.782) 0:02:09.786 ***** 2026-01-07 01:00:56.468853 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:56.468857 | orchestrator | 2026-01-07 01:00:56.468860 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-07 01:00:56.468864 
| orchestrator | Wednesday 07 January 2026 01:00:19 +0000 (0:00:15.598) 0:02:25.384 ***** 2026-01-07 01:00:56.468868 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:56.468872 | orchestrator | 2026-01-07 01:00:56.468875 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-07 01:00:56.468879 | orchestrator | 2026-01-07 01:00:56.468885 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-07 01:00:56.468892 | orchestrator | Wednesday 07 January 2026 01:00:21 +0000 (0:00:02.351) 0:02:27.736 ***** 2026-01-07 01:00:56.468896 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.468900 | orchestrator | 2026-01-07 01:00:56.468904 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-07 01:00:56.468907 | orchestrator | Wednesday 07 January 2026 01:00:38 +0000 (0:00:17.005) 0:02:44.741 ***** 2026-01-07 01:00:56.468911 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.468915 | orchestrator | 2026-01-07 01:00:56.468919 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-07 01:00:56.468923 | orchestrator | Wednesday 07 January 2026 01:00:39 +0000 (0:00:00.751) 0:02:45.493 ***** 2026-01-07 01:00:56.468927 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.468930 | orchestrator | 2026-01-07 01:00:56.468934 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-07 01:00:56.468938 | orchestrator | 2026-01-07 01:00:56.468942 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-07 01:00:56.468945 | orchestrator | Wednesday 07 January 2026 01:00:42 +0000 (0:00:02.846) 0:02:48.339 ***** 2026-01-07 01:00:56.468949 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:00:56.468954 | orchestrator | 
2026-01-07 01:00:56.468958 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-07 01:00:56.468962 | orchestrator | Wednesday 07 January 2026 01:00:42 +0000 (0:00:00.530) 0:02:48.870 ***** 2026-01-07 01:00:56.468965 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468969 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468973 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.468977 | orchestrator | 2026-01-07 01:00:56.468981 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-07 01:00:56.468985 | orchestrator | Wednesday 07 January 2026 01:00:44 +0000 (0:00:02.199) 0:02:51.069 ***** 2026-01-07 01:00:56.468989 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.468993 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.468997 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.469001 | orchestrator | 2026-01-07 01:00:56.469005 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-07 01:00:56.469011 | orchestrator | Wednesday 07 January 2026 01:00:46 +0000 (0:00:01.881) 0:02:52.951 ***** 2026-01-07 01:00:56.469015 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.469019 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.469023 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.469027 | orchestrator | 2026-01-07 01:00:56.469031 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-07 01:00:56.469035 | orchestrator | Wednesday 07 January 2026 01:00:48 +0000 (0:00:02.119) 0:02:55.071 ***** 2026-01-07 01:00:56.469039 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.469043 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.469047 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:56.469051 | orchestrator | 
2026-01-07 01:00:56.469054 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-07 01:00:56.469058 | orchestrator | Wednesday 07 January 2026 01:00:50 +0000 (0:00:01.996) 0:02:57.068 ***** 2026-01-07 01:00:56.469062 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:56.469066 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:56.469070 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:56.469074 | orchestrator | 2026-01-07 01:00:56.469077 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-07 01:00:56.469081 | orchestrator | Wednesday 07 January 2026 01:00:54 +0000 (0:00:03.232) 0:03:00.300 ***** 2026-01-07 01:00:56.469085 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:56.469089 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:56.469093 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:56.469097 | orchestrator | 2026-01-07 01:00:56.469103 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:00:56.469107 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-07 01:00:56.469112 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-07 01:00:56.469117 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-07 01:00:56.469121 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-07 01:00:56.469125 | orchestrator | 2026-01-07 01:00:56.469129 | orchestrator | 2026-01-07 01:00:56.469133 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:00:56.469137 | orchestrator | Wednesday 07 January 2026 01:00:54 +0000 (0:00:00.232) 0:03:00.533 ***** 2026-01-07 01:00:56.469141 | 
orchestrator | =============================================================================== 2026-01-07 01:00:56.469145 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.90s 2026-01-07 01:00:56.469148 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.14s 2026-01-07 01:00:56.469152 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.01s 2026-01-07 01:00:56.469156 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.02s 2026-01-07 01:00:56.469160 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.95s 2026-01-07 01:00:56.469164 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.01s 2026-01-07 01:00:56.469170 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.69s 2026-01-07 01:00:56.469174 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.91s 2026-01-07 01:00:56.469178 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.86s 2026-01-07 01:00:56.469196 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.84s 2026-01-07 01:00:56.469203 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.46s 2026-01-07 01:00:56.469207 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.29s 2026-01-07 01:00:56.469211 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.23s 2026-01-07 01:00:56.469215 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.12s 2026-01-07 01:00:56.469218 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.06s 2026-01-07 01:00:56.469222 | 
orchestrator | Check MariaDB service --------------------------------------------------- 2.89s 2026-01-07 01:00:56.469226 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.85s 2026-01-07 01:00:56.469230 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.47s 2026-01-07 01:00:56.469233 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.20s 2026-01-07 01:00:56.469237 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.12s 2026-01-07 01:00:56.469241 | orchestrator | 2026-01-07 01:00:56 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:59.515157 | orchestrator | 2026-01-07 01:00:59 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:00:59.516580 | orchestrator | 2026-01-07 01:00:59 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:00:59.518830 | orchestrator | 2026-01-07 01:00:59 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:00:59.518956 | orchestrator | 2026-01-07 01:00:59 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:02.564792 | orchestrator | 2026-01-07 01:01:02 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:02.566503 | orchestrator | 2026-01-07 01:01:02 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:02.568295 | orchestrator | 2026-01-07 01:01:02 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:02.568336 | orchestrator | 2026-01-07 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:05.604210 | orchestrator | 2026-01-07 01:01:05 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:05.606156 | orchestrator | 2026-01-07 01:01:05 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is 
in state STARTED 2026-01-07 01:01:05.607634 | orchestrator | 2026-01-07 01:01:05 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:05.607689 | orchestrator | 2026-01-07 01:01:05 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:08.649389 | orchestrator | 2026-01-07 01:01:08 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:08.649460 | orchestrator | 2026-01-07 01:01:08 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:08.649947 | orchestrator | 2026-01-07 01:01:08 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:08.649959 | orchestrator | 2026-01-07 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:11.689591 | orchestrator | 2026-01-07 01:01:11 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:11.690784 | orchestrator | 2026-01-07 01:01:11 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:11.691817 | orchestrator | 2026-01-07 01:01:11 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:11.691866 | orchestrator | 2026-01-07 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:14.730217 | orchestrator | 2026-01-07 01:01:14 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:14.731571 | orchestrator | 2026-01-07 01:01:14 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:14.733019 | orchestrator | 2026-01-07 01:01:14 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:14.733221 | orchestrator | 2026-01-07 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:17.774893 | orchestrator | 2026-01-07 01:01:17 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:17.775833 | 
orchestrator | 2026-01-07 01:01:17 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:17.776920 | orchestrator | 2026-01-07 01:01:17 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:17.776940 | orchestrator | 2026-01-07 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:20.822690 | orchestrator | 2026-01-07 01:01:20 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:20.826544 | orchestrator | 2026-01-07 01:01:20 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:20.829523 | orchestrator | 2026-01-07 01:01:20 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:20.829578 | orchestrator | 2026-01-07 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:23.867549 | orchestrator | 2026-01-07 01:01:23 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:23.867672 | orchestrator | 2026-01-07 01:01:23 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:23.868794 | orchestrator | 2026-01-07 01:01:23 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:23.868851 | orchestrator | 2026-01-07 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:26.909193 | orchestrator | 2026-01-07 01:01:26 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:26.909723 | orchestrator | 2026-01-07 01:01:26 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:26.911197 | orchestrator | 2026-01-07 01:01:26 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:26.911270 | orchestrator | 2026-01-07 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:29.954197 | orchestrator | 2026-01-07 01:01:29 | INFO  | Task 
e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:29.955493 | orchestrator | 2026-01-07 01:01:29 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:29.956840 | orchestrator | 2026-01-07 01:01:29 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:29.956883 | orchestrator | 2026-01-07 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:33.002696 | orchestrator | 2026-01-07 01:01:33 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:33.004451 | orchestrator | 2026-01-07 01:01:33 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:33.006665 | orchestrator | 2026-01-07 01:01:33 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:33.006719 | orchestrator | 2026-01-07 01:01:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:36.053233 | orchestrator | 2026-01-07 01:01:36 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:36.055199 | orchestrator | 2026-01-07 01:01:36 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:36.058155 | orchestrator | 2026-01-07 01:01:36 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:36.058317 | orchestrator | 2026-01-07 01:01:36 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:39.093197 | orchestrator | 2026-01-07 01:01:39 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:39.095257 | orchestrator | 2026-01-07 01:01:39 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:39.096315 | orchestrator | 2026-01-07 01:01:39 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:39.096358 | orchestrator | 2026-01-07 01:01:39 | INFO  | Wait 1 second(s) until the next 
check 2026-01-07 01:01:42.144675 | orchestrator | 2026-01-07 01:01:42 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:42.145282 | orchestrator | 2026-01-07 01:01:42 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:42.145911 | orchestrator | 2026-01-07 01:01:42 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:42.145947 | orchestrator | 2026-01-07 01:01:42 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:45.195021 | orchestrator | 2026-01-07 01:01:45 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:45.196284 | orchestrator | 2026-01-07 01:01:45 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:45.197412 | orchestrator | 2026-01-07 01:01:45 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:45.197457 | orchestrator | 2026-01-07 01:01:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:48.251274 | orchestrator | 2026-01-07 01:01:48 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:48.251404 | orchestrator | 2026-01-07 01:01:48 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:48.251987 | orchestrator | 2026-01-07 01:01:48 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:48.252522 | orchestrator | 2026-01-07 01:01:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:51.296605 | orchestrator | 2026-01-07 01:01:51 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:51.299401 | orchestrator | 2026-01-07 01:01:51 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:51.301545 | orchestrator | 2026-01-07 01:01:51 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 
01:01:51.301627 | orchestrator | 2026-01-07 01:01:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:54.360460 | orchestrator | 2026-01-07 01:01:54 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:54.360949 | orchestrator | 2026-01-07 01:01:54 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state STARTED 2026-01-07 01:01:54.362046 | orchestrator | 2026-01-07 01:01:54 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:54.362115 | orchestrator | 2026-01-07 01:01:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:57.408990 | orchestrator | 2026-01-07 01:01:57 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:01:57.412088 | orchestrator | 2026-01-07 01:01:57 | INFO  | Task b7e49769-cafc-48ac-8b1c-87f2fca1113c is in state SUCCESS 2026-01-07 01:01:57.414139 | orchestrator | 2026-01-07 01:01:57.414188 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 01:01:57.414196 | orchestrator | 2.16.14 2026-01-07 01:01:57.414203 | orchestrator | 2026-01-07 01:01:57.414209 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-07 01:01:57.414215 | orchestrator | 2026-01-07 01:01:57.414221 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-07 01:01:57.414227 | orchestrator | Wednesday 07 January 2026 00:59:47 +0000 (0:00:00.596) 0:00:00.596 ***** 2026-01-07 01:01:57.414233 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:01:57.414240 | orchestrator | 2026-01-07 01:01:57.414246 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-07 01:01:57.414252 | orchestrator | Wednesday 07 January 2026 00:59:47 +0000 (0:00:00.621) 0:00:01.217 ***** 
2026-01-07 01:01:57.414259 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.414265 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.414270 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.414275 | orchestrator | 2026-01-07 01:01:57.414281 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-07 01:01:57.414287 | orchestrator | Wednesday 07 January 2026 00:59:48 +0000 (0:00:00.558) 0:00:01.776 ***** 2026-01-07 01:01:57.414293 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.414298 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.414304 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.414310 | orchestrator | 2026-01-07 01:01:57.414332 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-07 01:01:57.414338 | orchestrator | Wednesday 07 January 2026 00:59:48 +0000 (0:00:00.299) 0:00:02.075 ***** 2026-01-07 01:01:57.414344 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.414350 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.414356 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.414361 | orchestrator | 2026-01-07 01:01:57.414367 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-07 01:01:57.414373 | orchestrator | Wednesday 07 January 2026 00:59:49 +0000 (0:00:00.822) 0:00:02.897 ***** 2026-01-07 01:01:57.414379 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.414384 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.414390 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.414404 | orchestrator | 2026-01-07 01:01:57.414415 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-07 01:01:57.414420 | orchestrator | Wednesday 07 January 2026 00:59:49 +0000 (0:00:00.308) 0:00:03.206 ***** 2026-01-07 01:01:57.414426 | orchestrator | ok: [testbed-node-3] 2026-01-07 
01:01:57.414431 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.414437 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.414443 | orchestrator | 2026-01-07 01:01:57.414448 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-07 01:01:57.414455 | orchestrator | Wednesday 07 January 2026 00:59:50 +0000 (0:00:00.306) 0:00:03.512 ***** 2026-01-07 01:01:57.414461 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.414467 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.414472 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.414475 | orchestrator | 2026-01-07 01:01:57.414479 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-07 01:01:57.414483 | orchestrator | Wednesday 07 January 2026 00:59:50 +0000 (0:00:00.321) 0:00:03.834 ***** 2026-01-07 01:01:57.414486 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.414491 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.414494 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.414498 | orchestrator | 2026-01-07 01:01:57.414504 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-07 01:01:57.414601 | orchestrator | Wednesday 07 January 2026 00:59:50 +0000 (0:00:00.484) 0:00:04.319 ***** 2026-01-07 01:01:57.414609 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.414615 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.414620 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.414827 | orchestrator | 2026-01-07 01:01:57.414834 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-07 01:01:57.414840 | orchestrator | Wednesday 07 January 2026 00:59:51 +0000 (0:00:00.283) 0:00:04.603 ***** 2026-01-07 01:01:57.414845 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 
2026-01-07 01:01:57.414851 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 01:01:57.414856 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 01:01:57.414862 | orchestrator | 2026-01-07 01:01:57.414867 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-07 01:01:57.414872 | orchestrator | Wednesday 07 January 2026 00:59:51 +0000 (0:00:00.631) 0:00:05.234 ***** 2026-01-07 01:01:57.414878 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.414883 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.414888 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.414893 | orchestrator | 2026-01-07 01:01:57.414899 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-07 01:01:57.414904 | orchestrator | Wednesday 07 January 2026 00:59:52 +0000 (0:00:00.467) 0:00:05.702 ***** 2026-01-07 01:01:57.414932 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 01:01:57.414939 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 01:01:57.414951 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 01:01:57.414957 | orchestrator | 2026-01-07 01:01:57.414962 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-07 01:01:57.414975 | orchestrator | Wednesday 07 January 2026 00:59:54 +0000 (0:00:02.287) 0:00:07.989 ***** 2026-01-07 01:01:57.414980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 01:01:57.414986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-07 01:01:57.414991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 01:01:57.414997 | 
orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415002 | orchestrator | 2026-01-07 01:01:57.415017 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-07 01:01:57.415022 | orchestrator | Wednesday 07 January 2026 00:59:55 +0000 (0:00:00.583) 0:00:08.573 ***** 2026-01-07 01:01:57.415026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415038 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415041 | orchestrator | 2026-01-07 01:01:57.415044 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-07 01:01:57.415048 | orchestrator | Wednesday 07 January 2026 00:59:56 +0000 (0:00:00.910) 0:00:09.483 ***** 2026-01-07 01:01:57.415085 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415092 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415103 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415106 | orchestrator | 2026-01-07 01:01:57.415130 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-07 01:01:57.415134 | orchestrator | Wednesday 07 January 2026 00:59:56 +0000 (0:00:00.397) 0:00:09.880 ***** 2026-01-07 01:01:57.415139 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1ef34da82057', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-07 00:59:53.014771', 'end': '2026-01-07 00:59:53.058504', 'delta': '0:00:00.043733', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1ef34da82057'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 
'ansible_loop_var': 'item'}) 2026-01-07 01:01:57.415245 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '902e49121487', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-07 00:59:53.834529', 'end': '2026-01-07 00:59:53.878624', 'delta': '0:00:00.044095', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['902e49121487'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-07 01:01:57.415261 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ce098031a6bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-07 00:59:54.390776', 'end': '2026-01-07 00:59:54.435418', 'delta': '0:00:00.044642', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ce098031a6bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-07 01:01:57.415266 | orchestrator | 2026-01-07 01:01:57.415269 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-07 01:01:57.415272 | orchestrator | Wednesday 07 January 2026 00:59:56 +0000 (0:00:00.220) 0:00:10.100 ***** 2026-01-07 01:01:57.415276 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.415279 | 
orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.415282 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.415285 | orchestrator | 2026-01-07 01:01:57.415289 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-07 01:01:57.415292 | orchestrator | Wednesday 07 January 2026 00:59:57 +0000 (0:00:00.480) 0:00:10.581 ***** 2026-01-07 01:01:57.415295 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-07 01:01:57.415299 | orchestrator | 2026-01-07 01:01:57.415302 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-07 01:01:57.415305 | orchestrator | Wednesday 07 January 2026 00:59:59 +0000 (0:00:02.131) 0:00:12.712 ***** 2026-01-07 01:01:57.415309 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415312 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415315 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415318 | orchestrator | 2026-01-07 01:01:57.415322 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-07 01:01:57.415325 | orchestrator | Wednesday 07 January 2026 00:59:59 +0000 (0:00:00.310) 0:00:13.023 ***** 2026-01-07 01:01:57.415328 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415331 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415335 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415338 | orchestrator | 2026-01-07 01:01:57.415341 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 01:01:57.415344 | orchestrator | Wednesday 07 January 2026 01:00:00 +0000 (0:00:00.386) 0:00:13.410 ***** 2026-01-07 01:01:57.415347 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415351 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415357 | orchestrator | skipping: [testbed-node-5] 2026-01-07 
01:01:57.415360 | orchestrator | 2026-01-07 01:01:57.415363 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-07 01:01:57.415366 | orchestrator | Wednesday 07 January 2026 01:00:00 +0000 (0:00:00.497) 0:00:13.907 ***** 2026-01-07 01:01:57.415370 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.415373 | orchestrator | 2026-01-07 01:01:57.415376 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-07 01:01:57.415379 | orchestrator | Wednesday 07 January 2026 01:00:00 +0000 (0:00:00.123) 0:00:14.031 ***** 2026-01-07 01:01:57.415383 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415386 | orchestrator | 2026-01-07 01:01:57.415389 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 01:01:57.415392 | orchestrator | Wednesday 07 January 2026 01:00:00 +0000 (0:00:00.250) 0:00:14.281 ***** 2026-01-07 01:01:57.415396 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415399 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415402 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415405 | orchestrator | 2026-01-07 01:01:57.415409 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-07 01:01:57.415412 | orchestrator | Wednesday 07 January 2026 01:00:01 +0000 (0:00:00.305) 0:00:14.587 ***** 2026-01-07 01:01:57.415415 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415418 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415422 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415425 | orchestrator | 2026-01-07 01:01:57.415428 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-07 01:01:57.415431 | orchestrator | Wednesday 07 January 2026 01:00:01 +0000 (0:00:00.314) 0:00:14.902 ***** 2026-01-07 
01:01:57.415434 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415438 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415441 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415444 | orchestrator | 2026-01-07 01:01:57.415447 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-07 01:01:57.415451 | orchestrator | Wednesday 07 January 2026 01:00:02 +0000 (0:00:00.549) 0:00:15.451 ***** 2026-01-07 01:01:57.415454 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415457 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415460 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415463 | orchestrator | 2026-01-07 01:01:57.415467 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-07 01:01:57.415470 | orchestrator | Wednesday 07 January 2026 01:00:02 +0000 (0:00:00.338) 0:00:15.790 ***** 2026-01-07 01:01:57.415473 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415476 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415480 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415483 | orchestrator | 2026-01-07 01:01:57.415488 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-07 01:01:57.415491 | orchestrator | Wednesday 07 January 2026 01:00:02 +0000 (0:00:00.314) 0:00:16.104 ***** 2026-01-07 01:01:57.415494 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415497 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415501 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415513 | orchestrator | 2026-01-07 01:01:57.415516 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-07 01:01:57.415520 | orchestrator | Wednesday 07 January 2026 01:00:03 +0000 (0:00:00.313) 0:00:16.417 ***** 2026-01-07 
01:01:57.415523 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415526 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415529 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415533 | orchestrator | 2026-01-07 01:01:57.415536 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-07 01:01:57.415539 | orchestrator | Wednesday 07 January 2026 01:00:03 +0000 (0:00:00.541) 0:00:16.958 ***** 2026-01-07 01:01:57.415545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc-osd--block--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc', 'dm-uuid-LVM-8bK9ULb58KIMrsCGmdMXR1IVFLBmguBSIutgTi2cmowlm638qyWdp3yczOl3SY0m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35426297--011a--51b2--a2d6--4f3d2a544c0e-osd--block--35426297--011a--51b2--a2d6--4f3d2a544c0e', 'dm-uuid-LVM-XAwDBKXsEIC3fWQHPh980GebvskQX2lbzqEPkgZKUqKZnnP9ltkb2SFHiz002pst'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e6008a2--36a5--590e--8013--ca4c2218d3f7-osd--block--4e6008a2--36a5--590e--8013--ca4c2218d3f7', 'dm-uuid-LVM-DZLvgoHJB2dzrj4NMm2HmBFaLg5fGwVRHPF1iBjynLE7kXuSlbDawfn32gGQsT1u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part1', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part14', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part15', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part16', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16bf28f1--ae52--5ff4--8907--41e0bcdec1af-osd--block--16bf28f1--ae52--5ff4--8907--41e0bcdec1af', 'dm-uuid-LVM-L4I3js6ulS27pfsMVBMrKX9few3BpmSOtHpsW7yBtNLn2YGAnjQ3XyLOFZUDY4vy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc-osd--block--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vSGfPS-hxuf-Lufz-XfkE-Ywjk-bxjG-7FXmso', 'scsi-0QEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463', 'scsi-SQEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--35426297--011a--51b2--a2d6--4f3d2a544c0e-osd--block--35426297--011a--51b2--a2d6--4f3d2a544c0e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z9iqRN-JMb3-ozU2-CggA-cwEO-iE1D-Q0xhxz', 'scsi-0QEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e', 'scsi-SQEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac', 'scsi-SQEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-07 01:01:57.415664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415667 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.415670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part16', 
'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbd296ce--f103--5a39--9243--23354e346d82-osd--block--bbd296ce--f103--5a39--9243--23354e346d82', 'dm-uuid-LVM-yQM0Ic07SLIwjKKbXxWvwWfr3QKZYWMVXtf7o6hMkya84FcGeHH44VtZxIsn328L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4e6008a2--36a5--590e--8013--ca4c2218d3f7-osd--block--4e6008a2--36a5--590e--8013--ca4c2218d3f7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xtfbv2-27VD-p67v-3ENf-8igL-00RL-AN09d4', 'scsi-0QEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d', 'scsi-SQEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5711b466--e770--5253--91be--c96275afda22-osd--block--5711b466--e770--5253--91be--c96275afda22', 'dm-uuid-LVM-MMe6XUI3c7bXIr2hZ1ceXdtZm1vNbondcwenie5XOI6Ph1DXfu59ts7jLTIYlfPa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--16bf28f1--ae52--5ff4--8907--41e0bcdec1af-osd--block--16bf28f1--ae52--5ff4--8907--41e0bcdec1af'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DX02z4-RWLg-SM3n-sJQS-j5mJ-wBkD-ipzyi4', 'scsi-0QEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843', 'scsi-SQEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e', 'scsi-SQEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-07 01:01:57.415733 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.415737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:01:57.415760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bbd296ce--f103--5a39--9243--23354e346d82-osd--block--bbd296ce--f103--5a39--9243--23354e346d82'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0DrbFv-KJwt-cB5g-wqzQ-T0K1-euBu-O9L2Ra', 'scsi-0QEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7', 'scsi-SQEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5711b466--e770--5253--91be--c96275afda22-osd--block--5711b466--e770--5253--91be--c96275afda22'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6Ejbey-1Tkt-iJTC-9Pct-AbyH-T5VC-rNly8E', 'scsi-0QEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616', 'scsi-SQEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5', 'scsi-SQEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:01:57.415784 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.415787 | orchestrator | 2026-01-07 01:01:57.415793 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-07 01:01:57.415799 | orchestrator | Wednesday 07 January 2026 01:00:04 +0000 (0:00:00.522) 0:00:17.480 ***** 2026-01-07 01:01:57.415804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc-osd--block--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc', 'dm-uuid-LVM-8bK9ULb58KIMrsCGmdMXR1IVFLBmguBSIutgTi2cmowlm638qyWdp3yczOl3SY0m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35426297--011a--51b2--a2d6--4f3d2a544c0e-osd--block--35426297--011a--51b2--a2d6--4f3d2a544c0e', 'dm-uuid-LVM-XAwDBKXsEIC3fWQHPh980GebvskQX2lbzqEPkgZKUqKZnnP9ltkb2SFHiz002pst'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415838 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e6008a2--36a5--590e--8013--ca4c2218d3f7-osd--block--4e6008a2--36a5--590e--8013--ca4c2218d3f7', 'dm-uuid-LVM-DZLvgoHJB2dzrj4NMm2HmBFaLg5fGwVRHPF1iBjynLE7kXuSlbDawfn32gGQsT1u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16bf28f1--ae52--5ff4--8907--41e0bcdec1af-osd--block--16bf28f1--ae52--5ff4--8907--41e0bcdec1af', 'dm-uuid-LVM-L4I3js6ulS27pfsMVBMrKX9few3BpmSOtHpsW7yBtNLn2YGAnjQ3XyLOFZUDY4vy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415845 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415850 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415858 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415873 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415879 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415891 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415914 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415924 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part1', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part14', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part15', 
'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part16', 'scsi-SQEMU_QEMU_HARDDISK_82128b42-724c-4521-9e38-07aa1eb87990-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415930 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc-osd--block--ef56a04c--76f1--5b5f--91f5--fd927a7d00fc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vSGfPS-hxuf-Lufz-XfkE-Ywjk-bxjG-7FXmso', 'scsi-0QEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463', 'scsi-SQEMU_QEMU_HARDDISK_b31d70e3-b168-49a6-8859-8d7d4687e463'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d88365d-d1c8-462a-a122-3aa4d05825ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 01:01:57.415962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--35426297--011a--51b2--a2d6--4f3d2a544c0e-osd--block--35426297--011a--51b2--a2d6--4f3d2a544c0e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z9iqRN-JMb3-ozU2-CggA-cwEO-iE1D-Q0xhxz', 'scsi-0QEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e', 'scsi-SQEMU_QEMU_HARDDISK_3408abb5-01eb-4a5b-916f-01f572b7843e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415970 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4e6008a2--36a5--590e--8013--ca4c2218d3f7-osd--block--4e6008a2--36a5--590e--8013--ca4c2218d3f7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xtfbv2-27VD-p67v-3ENf-8igL-00RL-AN09d4', 'scsi-0QEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d', 'scsi-SQEMU_QEMU_HARDDISK_259f5b3c-7b2e-4352-b31f-9bca396d8d3d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac', 'scsi-SQEMU_QEMU_HARDDISK_e64e84b9-7894-4a82-9b6d-98451d3876ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415978 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--16bf28f1--ae52--5ff4--8907--41e0bcdec1af-osd--block--16bf28f1--ae52--5ff4--8907--41e0bcdec1af'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DX02z4-RWLg-SM3n-sJQS-j5mJ-wBkD-ipzyi4', 'scsi-0QEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843', 'scsi-SQEMU_QEMU_HARDDISK_4e087c0c-4e3c-44c7-8e14-59e041e19843'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbd296ce--f103--5a39--9243--23354e346d82-osd--block--bbd296ce--f103--5a39--9243--23354e346d82', 'dm-uuid-LVM-yQM0Ic07SLIwjKKbXxWvwWfr3QKZYWMVXtf7o6hMkya84FcGeHH44VtZxIsn328L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415989 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.415997 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e', 'scsi-SQEMU_QEMU_HARDDISK_a08497b0-f7e1-49b2-88eb-3502c1ea5c7e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5711b466--e770--5253--91be--c96275afda22-osd--block--5711b466--e770--5253--91be--c96275afda22', 'dm-uuid-LVM-MMe6XUI3c7bXIr2hZ1ceXdtZm1vNbondcwenie5XOI6Ph1DXfu59ts7jLTIYlfPa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416005 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416009 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416019 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.416024 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416032 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416039 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416044 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416047 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416079 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416086 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416096 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d30537dd-f05d-4658-af3c-1d08cd97752f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416101 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bbd296ce--f103--5a39--9243--23354e346d82-osd--block--bbd296ce--f103--5a39--9243--23354e346d82'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0DrbFv-KJwt-cB5g-wqzQ-T0K1-euBu-O9L2Ra', 'scsi-0QEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7', 'scsi-SQEMU_QEMU_HARDDISK_e79c7a29-b83e-4f0d-b893-2f76efcc2de7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416107 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5711b466--e770--5253--91be--c96275afda22-osd--block--5711b466--e770--5253--91be--c96275afda22'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6Ejbey-1Tkt-iJTC-9Pct-AbyH-T5VC-rNly8E', 'scsi-0QEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616', 'scsi-SQEMU_QEMU_HARDDISK_fef6d06e-2e84-4523-b9f6-c646394c7616'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416111 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5', 'scsi-SQEMU_QEMU_HARDDISK_6ba210b4-a43a-450d-93ff-eb978033e3d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:01:57.416123 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.416127 | orchestrator | 2026-01-07 01:01:57.416131 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-07 01:01:57.416135 | orchestrator | Wednesday 07 January 2026 01:00:04 +0000 (0:00:00.616) 0:00:18.097 ***** 2026-01-07 01:01:57.416139 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.416143 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.416147 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.416151 | orchestrator | 2026-01-07 01:01:57.416154 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-07 01:01:57.416158 | orchestrator | Wednesday 07 January 2026 01:00:05 +0000 (0:00:00.642) 0:00:18.740 ***** 2026-01-07 01:01:57.416162 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.416166 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.416169 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.416173 | orchestrator | 2026-01-07 01:01:57.416177 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-07 01:01:57.416181 | orchestrator | Wednesday 07 January 2026 01:00:05 +0000 (0:00:00.505) 0:00:19.245 ***** 2026-01-07 01:01:57.416184 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.416188 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.416195 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.416198 | orchestrator | 2026-01-07 01:01:57.416202 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-07 01:01:57.416206 | orchestrator | Wednesday 07 January 2026 01:00:06 +0000 (0:00:00.624) 
0:00:19.869 ***** 2026-01-07 01:01:57.416210 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416214 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.416218 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.416222 | orchestrator | 2026-01-07 01:01:57.416226 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-07 01:01:57.416230 | orchestrator | Wednesday 07 January 2026 01:00:06 +0000 (0:00:00.300) 0:00:20.170 ***** 2026-01-07 01:01:57.416233 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416237 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.416241 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.416245 | orchestrator | 2026-01-07 01:01:57.416249 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-07 01:01:57.416253 | orchestrator | Wednesday 07 January 2026 01:00:07 +0000 (0:00:00.426) 0:00:20.597 ***** 2026-01-07 01:01:57.416257 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416260 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.416263 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.416266 | orchestrator | 2026-01-07 01:01:57.416270 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-07 01:01:57.416273 | orchestrator | Wednesday 07 January 2026 01:00:07 +0000 (0:00:00.552) 0:00:21.149 ***** 2026-01-07 01:01:57.416276 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-07 01:01:57.416280 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-07 01:01:57.416283 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-07 01:01:57.416286 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-07 01:01:57.416289 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-07 01:01:57.416292 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-07 01:01:57.416296 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-07 01:01:57.416299 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-07 01:01:57.416302 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-07 01:01:57.416305 | orchestrator | 2026-01-07 01:01:57.416308 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-07 01:01:57.416312 | orchestrator | Wednesday 07 January 2026 01:00:08 +0000 (0:00:00.849) 0:00:21.999 ***** 2026-01-07 01:01:57.416315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 01:01:57.416318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-07 01:01:57.416321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 01:01:57.416325 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416328 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-07 01:01:57.416331 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-07 01:01:57.416334 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-07 01:01:57.416337 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.416341 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-07 01:01:57.416344 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-07 01:01:57.416347 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-07 01:01:57.416350 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.416353 | orchestrator | 2026-01-07 01:01:57.416357 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-07 01:01:57.416360 | orchestrator | Wednesday 07 January 2026 01:00:09 +0000 (0:00:00.370) 0:00:22.370 ***** 2026-01-07 
01:01:57.416366 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:01:57.416372 | orchestrator | 2026-01-07 01:01:57.416375 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-07 01:01:57.416379 | orchestrator | Wednesday 07 January 2026 01:00:09 +0000 (0:00:00.704) 0:00:23.075 ***** 2026-01-07 01:01:57.416384 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416388 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.416391 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.416394 | orchestrator | 2026-01-07 01:01:57.416397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-07 01:01:57.416401 | orchestrator | Wednesday 07 January 2026 01:00:10 +0000 (0:00:00.332) 0:00:23.408 ***** 2026-01-07 01:01:57.416404 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416407 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.416410 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.416413 | orchestrator | 2026-01-07 01:01:57.416417 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-07 01:01:57.416420 | orchestrator | Wednesday 07 January 2026 01:00:10 +0000 (0:00:00.316) 0:00:23.724 ***** 2026-01-07 01:01:57.416423 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416426 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.416430 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:01:57.416433 | orchestrator | 2026-01-07 01:01:57.416436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-07 01:01:57.416439 | orchestrator | Wednesday 07 January 2026 01:00:10 +0000 (0:00:00.318) 0:00:24.042 ***** 2026-01-07 
01:01:57.416442 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.416446 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.416449 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.416452 | orchestrator | 2026-01-07 01:01:57.416455 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-07 01:01:57.416460 | orchestrator | Wednesday 07 January 2026 01:00:11 +0000 (0:00:00.619) 0:00:24.662 ***** 2026-01-07 01:01:57.416466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 01:01:57.416471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 01:01:57.416480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 01:01:57.416485 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416491 | orchestrator | 2026-01-07 01:01:57.416496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-07 01:01:57.416501 | orchestrator | Wednesday 07 January 2026 01:00:11 +0000 (0:00:00.377) 0:00:25.040 ***** 2026-01-07 01:01:57.416506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 01:01:57.416511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 01:01:57.416516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 01:01:57.416521 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416526 | orchestrator | 2026-01-07 01:01:57.416531 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-07 01:01:57.416534 | orchestrator | Wednesday 07 January 2026 01:00:12 +0000 (0:00:00.403) 0:00:25.444 ***** 2026-01-07 01:01:57.416537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 01:01:57.416541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 01:01:57.416544 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 01:01:57.416547 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416550 | orchestrator | 2026-01-07 01:01:57.416553 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-07 01:01:57.416557 | orchestrator | Wednesday 07 January 2026 01:00:12 +0000 (0:00:00.373) 0:00:25.817 ***** 2026-01-07 01:01:57.416560 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:01:57.416563 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:01:57.416566 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:01:57.416572 | orchestrator | 2026-01-07 01:01:57.416576 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-07 01:01:57.416579 | orchestrator | Wednesday 07 January 2026 01:00:12 +0000 (0:00:00.334) 0:00:26.152 ***** 2026-01-07 01:01:57.416582 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-07 01:01:57.416585 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-07 01:01:57.416589 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-07 01:01:57.416592 | orchestrator | 2026-01-07 01:01:57.416595 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-07 01:01:57.416598 | orchestrator | Wednesday 07 January 2026 01:00:13 +0000 (0:00:00.509) 0:00:26.661 ***** 2026-01-07 01:01:57.416601 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 01:01:57.416605 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 01:01:57.416608 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 01:01:57.416611 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-07 01:01:57.416614 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-07 01:01:57.416618 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-07 01:01:57.416621 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-07 01:01:57.416624 | orchestrator | 2026-01-07 01:01:57.416627 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-07 01:01:57.416630 | orchestrator | Wednesday 07 January 2026 01:00:14 +0000 (0:00:01.021) 0:00:27.683 ***** 2026-01-07 01:01:57.416634 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 01:01:57.416637 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 01:01:57.416643 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 01:01:57.416646 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-07 01:01:57.416649 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-07 01:01:57.416652 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-07 01:01:57.416658 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-07 01:01:57.416661 | orchestrator | 2026-01-07 01:01:57.416665 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-07 01:01:57.416668 | orchestrator | Wednesday 07 January 2026 01:00:16 +0000 (0:00:02.008) 0:00:29.691 ***** 2026-01-07 01:01:57.416671 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:01:57.416674 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:01:57.416678 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-07 01:01:57.416683 | orchestrator | 2026-01-07 01:01:57.416688 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-07 01:01:57.416695 | orchestrator | Wednesday 07 January 2026 01:00:16 +0000 (0:00:00.370) 0:00:30.061 ***** 2026-01-07 01:01:57.416703 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 01:01:57.416708 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 01:01:57.416714 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 01:01:57.416723 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 01:01:57.416728 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 01:01:57.416733 | orchestrator | 2026-01-07 01:01:57.416738 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-07 01:01:57.416743 | orchestrator | Wednesday 07 January 2026 01:01:01 +0000 (0:00:44.356) 0:01:14.418 ***** 2026-01-07 01:01:57.416747 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416753 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416757 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416768 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416773 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416777 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-07 01:01:57.416783 | orchestrator | 2026-01-07 01:01:57.416788 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-07 01:01:57.416794 | orchestrator | Wednesday 07 January 2026 01:01:24 +0000 (0:00:23.274) 0:01:37.692 ***** 2026-01-07 01:01:57.416802 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416808 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416813 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416819 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416823 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416826 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416829 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 01:01:57.416832 | orchestrator | 2026-01-07 01:01:57.416835 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-07 01:01:57.416838 | orchestrator | Wednesday 07 January 2026 01:01:35 +0000 (0:00:11.416) 0:01:49.109 ***** 2026-01-07 01:01:57.416842 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416847 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 01:01:57.416850 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:01:57.416854 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416857 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 01:01:57.416863 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:01:57.416866 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416870 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 01:01:57.416923 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:01:57.416928 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416931 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 01:01:57.416935 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:01:57.416938 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416941 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-07 01:01:57.416944 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:01:57.416948 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:01:57.416951 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 01:01:57.416954 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:01:57.416957 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-07 01:01:57.416961 | orchestrator | 2026-01-07 01:01:57.416964 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:01:57.416967 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-07 01:01:57.416971 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-07 01:01:57.416975 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-07 01:01:57.416978 | orchestrator | 2026-01-07 01:01:57.416981 | orchestrator | 2026-01-07 01:01:57.416984 | orchestrator | 2026-01-07 01:01:57.416988 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:01:57.416991 | orchestrator | Wednesday 07 January 2026 01:01:55 +0000 (0:00:19.363) 0:02:08.473 ***** 2026-01-07 01:01:57.416994 | orchestrator | =============================================================================== 2026-01-07 01:01:57.416998 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.36s 2026-01-07 01:01:57.417001 | orchestrator | generate keys ---------------------------------------------------------- 23.27s 2026-01-07 01:01:57.417004 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 19.36s 
2026-01-07 01:01:57.417007 | orchestrator | get keys from monitors ------------------------------------------------- 11.42s 2026-01-07 01:01:57.417011 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.29s 2026-01-07 01:01:57.417014 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.13s 2026-01-07 01:01:57.417017 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.01s 2026-01-07 01:01:57.417020 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s 2026-01-07 01:01:57.417023 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.91s 2026-01-07 01:01:57.417027 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2026-01-07 01:01:57.417030 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.82s 2026-01-07 01:01:57.417033 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2026-01-07 01:01:57.417036 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s 2026-01-07 01:01:57.417040 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s 2026-01-07 01:01:57.417043 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.62s 2026-01-07 01:01:57.417046 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s 2026-01-07 01:01:57.417065 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s 2026-01-07 01:01:57.417072 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s 2026-01-07 01:01:57.417076 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.58s 2026-01-07 
01:01:57.417079 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.56s 2026-01-07 01:01:57.417082 | orchestrator | 2026-01-07 01:01:57 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:01:57.417087 | orchestrator | 2026-01-07 01:01:57 | INFO  | Task 497c70a7-7f2a-4e5a-b3d1-9620c19555f7 is in state STARTED 2026-01-07 01:01:57.417121 | orchestrator | 2026-01-07 01:01:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:00.478458 | orchestrator | 2026-01-07 01:02:00 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:00.479828 | orchestrator | 2026-01-07 01:02:00 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state STARTED 2026-01-07 01:02:00.481898 | orchestrator | 2026-01-07 01:02:00 | INFO  | Task 497c70a7-7f2a-4e5a-b3d1-9620c19555f7 is in state STARTED 2026-01-07 01:02:00.481930 | orchestrator | 2026-01-07 01:02:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:34.059486 | orchestrator | 2026-01-07 01:02:34 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:34.064162 | orchestrator | 2026-01-07 01:02:34 | INFO  | Task af2ff6f9-43e5-49e8-88ef-f33b28dd611b is in state SUCCESS 2026-01-07 01:02:34.067782 | orchestrator | 2026-01-07 01:02:34.067842 | orchestrator | 2026-01-07 01:02:34.067850 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:02:34.067857 | orchestrator | 2026-01-07 01:02:34.067863 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:02:34.067873 | orchestrator | Wednesday 07 January 2026 01:00:59 +0000 (0:00:00.257) 0:00:00.257 ***** 2026-01-07 01:02:34.067881 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:02:34.067887 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:02:34.067893 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:02:34.067900 | orchestrator | 2026-01-07 01:02:34.067905 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:02:34.067911 | orchestrator | Wednesday 07 January 2026 01:00:59 +0000 (0:00:00.297) 0:00:00.555 ***** 2026-01-07 01:02:34.067934 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-07 01:02:34.067941 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-07 01:02:34.067948 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-07 01:02:34.067952 | orchestrator | 2026-01-07 01:02:34.067956 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-07 01:02:34.067960 | 
orchestrator | 2026-01-07 01:02:34.067964 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 01:02:34.067968 | orchestrator | Wednesday 07 January 2026 01:00:59 +0000 (0:00:00.412) 0:00:00.967 ***** 2026-01-07 01:02:34.067987 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:02:34.067992 | orchestrator | 2026-01-07 01:02:34.067996 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-07 01:02:34.068000 | orchestrator | Wednesday 07 January 2026 01:01:00 +0000 (0:00:00.503) 0:00:01.470 ***** 2026-01-07 01:02:34.068016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:02:34.068034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:02:34.068047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:02:34.068054 | orchestrator | 2026-01-07 01:02:34.068058 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-07 01:02:34.068062 | orchestrator | Wednesday 07 January 2026 01:01:01 +0000 (0:00:01.171) 0:00:02.642 ***** 2026-01-07 01:02:34.068066 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:02:34.068069 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:02:34.068073 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:02:34.068077 | orchestrator | 2026-01-07 01:02:34.068090 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 01:02:34.068097 | orchestrator | Wednesday 07 January 2026 01:01:02 
+0000 (0:00:00.482) 0:00:03.124 *****
2026-01-07 01:02:34.068101 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
[the same skip is reported on testbed-node-0, testbed-node-1 and testbed-node-2 for each disabled item: cloudkitty, heat, ironic, masakari, mistral, tacker, trove and watcher]
2026-01-07 01:02:34.068225 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-01-07 01:02:34.068230 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-01-07 01:02:34.068234 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-01-07 01:02:34.068238 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-01-07 01:02:34.068242 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-01-07 01:02:34.068248 | orchestrator | included:
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-01-07 01:02:34.068256 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-01-07 01:02:34.068259 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-01-07 01:02:34.068263 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-01-07 01:02:34.068268 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-01-07 01:02:34.068275 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-07 01:02:34.068279 | orchestrator | Wednesday 07 January 2026 01:01:02 +0000 (0:00:00.726) 0:00:03.851 *****
2026-01-07 01:02:34.068283 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:02:34.068287 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:02:34.068290 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:02:34.068298 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-07 01:02:34.068302 | orchestrator | Wednesday 07 January 2026 01:01:03 +0000 (0:00:00.290) 0:00:04.141 *****
2026-01-07 01:02:34.068306 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:34.068319 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-07 01:02:34.068326 | orchestrator | Wednesday 07 January 2026 01:01:03 +0000 (0:00:00.129) 0:00:04.271 *****
2026-01-07 01:02:34.068334 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:34.068342 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:34.068430 | orchestrator | skipping: [testbed-node-2]
[this trio of tasks repeats once per included policy file — nine more iterations, 01:01:03 through 01:01:11 — with identical results each time: ok on all three nodes for "Update policy file name", skipping for the other two tasks]
2026-01-07 01:02:34.069342 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-01-07 01:02:34.069348 | orchestrator | Wednesday 07 January 2026 01:01:11 +0000 (0:00:00.480) 0:00:12.681 *****
2026-01-07 01:02:34.069352 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:02:34.069356 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:34.069360 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:02:34.069367 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-07 01:02:34.069371 | orchestrator | Wednesday 07 January 2026 01:01:13 +0000 (0:00:01.785) 0:00:14.467 *****
2026-01-07 01:02:34.069375 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-07 01:02:34.069379 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-07 01:02:34.069383 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-07 01:02:34.069391 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-07 01:02:34.069394 | orchestrator | Wednesday 07 January 2026 01:01:15 +0000 (0:00:01.886) 0:00:16.353 *****
2026-01-07 01:02:34.069398 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-07 01:02:34.069403 | orchestrator
| changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-07 01:02:34.069406 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-07 01:02:34.069414 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-07 01:02:34.069422 | orchestrator | Wednesday 07 January 2026 01:01:17 +0000 (0:00:02.245) 0:00:18.599 *****
2026-01-07 01:02:34.069426 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-07 01:02:34.069430 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-07 01:02:34.069434 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-07 01:02:34.069442 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-07 01:02:34.069445 | orchestrator | Wednesday 07 January 2026 01:01:19 +0000 (0:00:02.002) 0:00:20.602 *****
2026-01-07 01:02:34.069449 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:34.069453 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:34.069457 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:34.069464 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-07 01:02:34.069469 | orchestrator | Wednesday 07 January 2026 01:01:19 +0000 (0:00:00.317) 0:00:20.919 *****
2026-01-07 01:02:34.069475 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:34.069481 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:34.069487 | orchestrator | skipping: [testbed-node-2]
2026-01-07
01:02:34.069500 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 01:02:34.069506 | orchestrator | Wednesday 07 January 2026 01:01:20 +0000 (0:00:00.295) 0:00:21.215 ***** 2026-01-07 01:02:34.069512 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:02:34.069520 | orchestrator | 2026-01-07 01:02:34.069524 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-07 01:02:34.069527 | orchestrator | Wednesday 07 January 2026 01:01:20 +0000 (0:00:00.748) 0:00:21.963 ***** 2026-01-07 01:02:34.069536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:02:34.069553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:02:34.069575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:02:34.069582 | orchestrator | 2026-01-07 01:02:34.069589 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-07 01:02:34.069594 | orchestrator | Wednesday 07 January 2026 01:01:22 +0000 (0:00:01.524) 0:00:23.487 ***** 2026-01-07 01:02:34.069606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 
'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:02:34.069618 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:34.069631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2026-01-07 01:02:34.069638 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:34.069645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:02:34.069656 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:34.069662 | orchestrator | 2026-01-07 01:02:34.069667 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-07 01:02:34.069671 | orchestrator | Wednesday 07 January 2026 01:01:23 +0000 (0:00:00.642) 0:00:24.130 ***** 2026-01-07 01:02:34.069682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:02:34.069687 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:34.069695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:02:34.069700 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:34.069711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:02:34.069720 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:34.069725 | orchestrator | 2026-01-07 01:02:34.069730 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-07 01:02:34.069734 | orchestrator | Wednesday 07 January 2026 01:01:23 +0000 (0:00:00.803) 0:00:24.933 ***** 2026-01-07 01:02:34.069742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:02:34.069751 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:02:34.069762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:02:34.069768 | orchestrator | 2026-01-07 01:02:34.069772 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 01:02:34.069777 | orchestrator | Wednesday 07 January 2026 01:01:25 +0000 (0:00:01.813) 0:00:26.747 ***** 2026-01-07 01:02:34.069782 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:34.069787 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:34.069792 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:34.069796 | orchestrator | 2026-01-07 01:02:34.069801 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 01:02:34.069806 | orchestrator | Wednesday 07 January 2026 01:01:25 +0000 (0:00:00.324) 0:00:27.071 ***** 2026-01-07 01:02:34.069810 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:02:34.069815 | orchestrator | 2026-01-07 01:02:34.069820 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-07 01:02:34.069832 | orchestrator | Wednesday 07 January 2026 01:01:26 +0000 (0:00:00.578) 0:00:27.650 ***** 2026-01-07 01:02:34.069839 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:02:34.069845 | orchestrator | 2026-01-07 01:02:34.069851 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-07 01:02:34.069858 | orchestrator | Wednesday 07 January 2026 01:01:29 +0000 (0:00:02.727) 0:00:30.377 ***** 2026-01-07 01:02:34.069864 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:02:34.069870 | 
orchestrator | 2026-01-07 01:02:34.069876 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-07 01:02:34.069883 | orchestrator | Wednesday 07 January 2026 01:01:31 +0000 (0:00:02.580) 0:00:32.958 ***** 2026-01-07 01:02:34.069889 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:02:34.069895 | orchestrator | 2026-01-07 01:02:34.069902 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-07 01:02:34.069909 | orchestrator | Wednesday 07 January 2026 01:01:48 +0000 (0:00:16.279) 0:00:49.237 ***** 2026-01-07 01:02:34.069916 | orchestrator | 2026-01-07 01:02:34.069923 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-07 01:02:34.069929 | orchestrator | Wednesday 07 January 2026 01:01:48 +0000 (0:00:00.066) 0:00:49.304 ***** 2026-01-07 01:02:34.069936 | orchestrator | 2026-01-07 01:02:34.069940 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-07 01:02:34.069945 | orchestrator | Wednesday 07 January 2026 01:01:48 +0000 (0:00:00.074) 0:00:49.378 ***** 2026-01-07 01:02:34.069949 | orchestrator | 2026-01-07 01:02:34.069954 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-07 01:02:34.069958 | orchestrator | Wednesday 07 January 2026 01:01:48 +0000 (0:00:00.070) 0:00:49.448 ***** 2026-01-07 01:02:34.069963 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:02:34.069969 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:02:34.070091 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:02:34.070098 | orchestrator | 2026-01-07 01:02:34.070104 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:02:34.070111 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-07 
01:02:34.070118 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-07 01:02:34.070125 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-07 01:02:34.070132 | orchestrator | 2026-01-07 01:02:34.070138 | orchestrator | 2026-01-07 01:02:34.070144 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:02:34.070148 | orchestrator | Wednesday 07 January 2026 01:02:32 +0000 (0:00:43.817) 0:01:33.266 ***** 2026-01-07 01:02:34.070152 | orchestrator | =============================================================================== 2026-01-07 01:02:34.070156 | orchestrator | horizon : Restart horizon container ------------------------------------ 43.82s 2026-01-07 01:02:34.070160 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.28s 2026-01-07 01:02:34.070163 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.73s 2026-01-07 01:02:34.070167 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.58s 2026-01-07 01:02:34.070171 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.25s 2026-01-07 01:02:34.070175 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.00s 2026-01-07 01:02:34.070179 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.89s 2026-01-07 01:02:34.070182 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.81s 2026-01-07 01:02:34.070186 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.79s 2026-01-07 01:02:34.070196 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.52s 2026-01-07 01:02:34.070203 | orchestrator | 
horizon : Ensuring config directories exist ----------------------------- 1.17s 2026-01-07 01:02:34.070207 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.80s 2026-01-07 01:02:34.070211 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2026-01-07 01:02:34.070214 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-01-07 01:02:34.070218 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s 2026-01-07 01:02:34.070222 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s 2026-01-07 01:02:34.070226 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-01-07 01:02:34.070229 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s 2026-01-07 01:02:34.070233 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-01-07 01:02:34.070237 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2026-01-07 01:02:34.070241 | orchestrator | 2026-01-07 01:02:34 | INFO  | Task 497c70a7-7f2a-4e5a-b3d1-9620c19555f7 is in state SUCCESS 2026-01-07 01:02:34.070245 | orchestrator | 2026-01-07 01:02:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:37.127919 | orchestrator | 2026-01-07 01:02:37 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:37.131017 | orchestrator | 2026-01-07 01:02:37 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:02:37.132042 | orchestrator | 2026-01-07 01:02:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:40.188828 | orchestrator | 2026-01-07 01:02:40 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:40.189097 | 
orchestrator | 2026-01-07 01:02:40 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:02:40.190272 | orchestrator | 2026-01-07 01:02:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:43.233884 | orchestrator | 2026-01-07 01:02:43 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:43.235827 | orchestrator | 2026-01-07 01:02:43 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:02:43.235879 | orchestrator | 2026-01-07 01:02:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:46.290142 | orchestrator | 2026-01-07 01:02:46 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:46.295804 | orchestrator | 2026-01-07 01:02:46 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:02:46.295853 | orchestrator | 2026-01-07 01:02:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:49.345999 | orchestrator | 2026-01-07 01:02:49 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:49.347188 | orchestrator | 2026-01-07 01:02:49 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:02:49.347223 | orchestrator | 2026-01-07 01:02:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:52.383327 | orchestrator | 2026-01-07 01:02:52 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:52.388246 | orchestrator | 2026-01-07 01:02:52 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:02:52.388983 | orchestrator | 2026-01-07 01:02:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:55.428176 | orchestrator | 2026-01-07 01:02:55 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:55.430139 | orchestrator | 2026-01-07 01:02:55 | INFO  | Task 
97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:02:55.430194 | orchestrator | 2026-01-07 01:02:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:58.474854 | orchestrator | 2026-01-07 01:02:58 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:02:58.476097 | orchestrator | 2026-01-07 01:02:58 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:02:58.476153 | orchestrator | 2026-01-07 01:02:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:01.528784 | orchestrator | 2026-01-07 01:03:01 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:01.529883 | orchestrator | 2026-01-07 01:03:01 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:01.529968 | orchestrator | 2026-01-07 01:03:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:04.571606 | orchestrator | 2026-01-07 01:03:04 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:04.576790 | orchestrator | 2026-01-07 01:03:04 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:04.576880 | orchestrator | 2026-01-07 01:03:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:07.625350 | orchestrator | 2026-01-07 01:03:07 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:07.627046 | orchestrator | 2026-01-07 01:03:07 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:07.627131 | orchestrator | 2026-01-07 01:03:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:10.674263 | orchestrator | 2026-01-07 01:03:10 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:10.676759 | orchestrator | 2026-01-07 01:03:10 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 
01:03:10.676879 | orchestrator | 2026-01-07 01:03:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:13.726578 | orchestrator | 2026-01-07 01:03:13 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:13.728860 | orchestrator | 2026-01-07 01:03:13 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:13.729021 | orchestrator | 2026-01-07 01:03:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:16.778364 | orchestrator | 2026-01-07 01:03:16 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:16.780987 | orchestrator | 2026-01-07 01:03:16 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:16.781039 | orchestrator | 2026-01-07 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:19.822091 | orchestrator | 2026-01-07 01:03:19 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:19.823257 | orchestrator | 2026-01-07 01:03:19 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:19.823305 | orchestrator | 2026-01-07 01:03:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:22.870826 | orchestrator | 2026-01-07 01:03:22 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:22.872732 | orchestrator | 2026-01-07 01:03:22 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:22.872800 | orchestrator | 2026-01-07 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:25.926256 | orchestrator | 2026-01-07 01:03:25 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:25.927149 | orchestrator | 2026-01-07 01:03:25 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:25.927195 | orchestrator | 2026-01-07 01:03:25 | INFO  | Wait 1 second(s) 
until the next check 2026-01-07 01:03:28.967422 | orchestrator | 2026-01-07 01:03:28 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:28.969182 | orchestrator | 2026-01-07 01:03:28 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:28.969427 | orchestrator | 2026-01-07 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:32.022749 | orchestrator | 2026-01-07 01:03:32 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:32.023744 | orchestrator | 2026-01-07 01:03:32 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:32.023812 | orchestrator | 2026-01-07 01:03:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:35.074110 | orchestrator | 2026-01-07 01:03:35 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state STARTED 2026-01-07 01:03:35.074202 | orchestrator | 2026-01-07 01:03:35 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state STARTED 2026-01-07 01:03:35.074273 | orchestrator | 2026-01-07 01:03:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:38.102468 | orchestrator | 2026-01-07 01:03:38 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:03:38.104227 | orchestrator | 2026-01-07 01:03:38 | INFO  | Task e5b45a27-f8aa-4972-93a1-7672ac840f36 is in state STARTED 2026-01-07 01:03:38.106234 | orchestrator | 2026-01-07 01:03:38 | INFO  | Task e14e247e-09de-49bf-abc6-5c664de68816 is in state SUCCESS 2026-01-07 01:03:38.107731 | orchestrator | 2026-01-07 01:03:38.107781 | orchestrator | 2026-01-07 01:03:38.107792 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-07 01:03:38.107801 | orchestrator | 2026-01-07 01:03:38.107823 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-07 01:03:38.107828 | 
orchestrator | Wednesday 07 January 2026 01:01:59 +0000 (0:00:00.159) 0:00:00.159 ***** 2026-01-07 01:03:38.107833 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-07 01:03:38.107877 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.107884 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.107891 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 01:03:38.107898 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.107905 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-07 01:03:38.107912 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-07 01:03:38.107922 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-07 01:03:38.107931 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-07 01:03:38.107937 | orchestrator | 2026-01-07 01:03:38.107943 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-07 01:03:38.107951 | orchestrator | Wednesday 07 January 2026 01:02:04 +0000 (0:00:04.291) 0:00:04.450 ***** 2026-01-07 01:03:38.107985 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-07 01:03:38.107991 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.107998 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.108005 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 01:03:38.108011 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.108017 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-07 01:03:38.108024 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-07 01:03:38.108031 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-07 01:03:38.108039 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-07 01:03:38.108045 | orchestrator | 2026-01-07 01:03:38.108052 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-07 01:03:38.108058 | orchestrator | Wednesday 07 January 2026 01:02:08 +0000 (0:00:04.200) 0:00:08.651 ***** 2026-01-07 01:03:38.108063 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 01:03:38.108076 | orchestrator | 2026-01-07 01:03:38.108081 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-07 01:03:38.108086 | orchestrator | Wednesday 07 January 2026 01:02:09 +0000 (0:00:01.047) 0:00:09.699 ***** 2026-01-07 01:03:38.108090 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-07 01:03:38.108095 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.108099 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.108103 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 
2026-01-07 01:03:38.108108 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.108112 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-07 01:03:38.108116 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-07 01:03:38.108132 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-07 01:03:38.108136 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-07 01:03:38.108140 | orchestrator | 2026-01-07 01:03:38.108144 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-07 01:03:38.108149 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:13.766) 0:00:23.466 ***** 2026-01-07 01:03:38.108153 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-07 01:03:38.108157 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-07 01:03:38.108162 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-07 01:03:38.108166 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-07 01:03:38.108181 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-07 01:03:38.108196 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-07 01:03:38.108203 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-07 01:03:38.108214 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-07 01:03:38.108230 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-07 01:03:38.108237 | orchestrator | 2026-01-07 01:03:38.108253 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-07 01:03:38.108260 | orchestrator | Wednesday 07 January 2026 01:02:26 +0000 (0:00:03.090) 0:00:26.556 ***** 2026-01-07 01:03:38.108267 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-07 01:03:38.108275 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.108281 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.108287 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 01:03:38.108294 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-07 01:03:38.108301 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-07 01:03:38.108309 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-07 01:03:38.108315 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-07 01:03:38.108322 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-07 01:03:38.108329 | orchestrator | 2026-01-07 01:03:38.108336 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:03:38.108341 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:03:38.108348 | orchestrator | 2026-01-07 01:03:38.108353 | orchestrator | 2026-01-07 01:03:38.108358 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 
01:03:38.108362 | orchestrator | Wednesday 07 January 2026 01:02:33 +0000 (0:00:07.005) 0:00:33.562 ***** 2026-01-07 01:03:38.108367 | orchestrator | =============================================================================== 2026-01-07 01:03:38.108372 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.77s 2026-01-07 01:03:38.108377 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.01s 2026-01-07 01:03:38.108382 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.29s 2026-01-07 01:03:38.108388 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.20s 2026-01-07 01:03:38.108393 | orchestrator | Check if target directories exist --------------------------------------- 3.09s 2026-01-07 01:03:38.108398 | orchestrator | Create share directory -------------------------------------------------- 1.05s 2026-01-07 01:03:38.108404 | orchestrator | 2026-01-07 01:03:38.108409 | orchestrator | 2026-01-07 01:03:38.108414 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:03:38.108419 | orchestrator | 2026-01-07 01:03:38.108424 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:03:38.108429 | orchestrator | Wednesday 07 January 2026 01:00:59 +0000 (0:00:00.253) 0:00:00.253 ***** 2026-01-07 01:03:38.108434 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:03:38.108440 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:03:38.108444 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:03:38.108448 | orchestrator | 2026-01-07 01:03:38.108452 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:03:38.108456 | orchestrator | Wednesday 07 January 2026 01:00:59 +0000 (0:00:00.281) 0:00:00.534 ***** 2026-01-07 01:03:38.108460 | orchestrator | 
ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-07 01:03:38.108465 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-07 01:03:38.108469 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-07 01:03:38.108473 | orchestrator | 2026-01-07 01:03:38.108478 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-07 01:03:38.108482 | orchestrator | 2026-01-07 01:03:38.108486 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 01:03:38.108494 | orchestrator | Wednesday 07 January 2026 01:00:59 +0000 (0:00:00.424) 0:00:00.959 ***** 2026-01-07 01:03:38.108498 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:03:38.108502 | orchestrator | 2026-01-07 01:03:38.108506 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-07 01:03:38.108511 | orchestrator | Wednesday 07 January 2026 01:01:00 +0000 (0:00:00.516) 0:00:01.475 ***** 2026-01-07 01:03:38.108574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.108583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.108588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.108593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108646 | orchestrator | 2026-01-07 01:03:38.108650 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-07 01:03:38.108655 | orchestrator | Wednesday 07 January 2026 01:01:01 +0000 (0:00:01.598) 0:00:03.074 ***** 2026-01-07 01:03:38.108714 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.108719 | orchestrator | 2026-01-07 01:03:38.108724 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-07 01:03:38.108728 | orchestrator | Wednesday 07 January 2026 01:01:01 +0000 (0:00:00.132) 0:00:03.207 ***** 2026-01-07 01:03:38.108732 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.108736 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.108745 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.108749 | orchestrator | 2026-01-07 01:03:38.108753 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-07 01:03:38.108757 | orchestrator | Wednesday 07 January 2026 01:01:02 +0000 (0:00:00.492) 0:00:03.700 ***** 2026-01-07 01:03:38.108761 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:03:38.108766 | orchestrator | 2026-01-07 01:03:38.108770 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 01:03:38.108774 | orchestrator | Wednesday 07 January 2026 01:01:03 +0000 (0:00:00.851) 0:00:04.551 ***** 2026-01-07 01:03:38.108778 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:03:38.108782 | orchestrator | 2026-01-07 01:03:38.108787 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 
2026-01-07 01:03:38.108791 | orchestrator | Wednesday 07 January 2026 01:01:03 +0000 (0:00:00.546) 0:00:05.098 ***** 2026-01-07 01:03:38.108800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.108808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.108813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.108823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108828 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.108869 | orchestrator | 2026-01-07 01:03:38.108873 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-07 01:03:38.108881 | orchestrator | Wednesday 07 January 2026 01:01:07 +0000 (0:00:03.598) 0:00:08.696 ***** 2026-01-07 01:03:38.108886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:03:38.108890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.108960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:03:38.108987 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.109000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:03:38.109005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:03:38.109018 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.109022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:03:38.109027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 
01:03:38.109038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:03:38.109042 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.109047 | orchestrator | 2026-01-07 01:03:38.109051 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-07 01:03:38.109055 | orchestrator | Wednesday 07 January 2026 01:01:08 +0000 (0:00:00.896) 0:00:09.593 ***** 2026-01-07 01:03:38.109060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:03:38.109069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:03:38.109078 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.109082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:03:38.109093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:03:38.109102 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.109110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:03:38.109115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:03:38.109123 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.109128 | orchestrator | 2026-01-07 01:03:38.109132 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-07 01:03:38.109136 | orchestrator | Wednesday 07 January 2026 01:01:09 +0000 (0:00:00.824) 0:00:10.417 ***** 2026-01-07 01:03:38.109147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.109152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.109162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.109166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109204 | orchestrator | 2026-01-07 01:03:38.109208 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-07 01:03:38.109212 | orchestrator | Wednesday 07 January 2026 01:01:12 +0000 (0:00:03.688) 0:00:14.106 ***** 2026-01-07 01:03:38.109217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.109222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.109244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.109253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109282 | orchestrator | 2026-01-07 01:03:38.109286 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-07 01:03:38.109291 | orchestrator | Wednesday 07 January 2026 01:01:18 +0000 (0:00:05.285) 0:00:19.392 ***** 2026-01-07 01:03:38.109295 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:38.109299 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:03:38.109304 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:03:38.109308 | orchestrator | 2026-01-07 01:03:38.109312 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-07 01:03:38.109317 | orchestrator | Wednesday 07 January 2026 01:01:19 +0000 (0:00:01.363) 0:00:20.755 ***** 2026-01-07 01:03:38.109321 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.109325 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.109330 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.109334 | orchestrator | 2026-01-07 01:03:38.109338 | 
orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-07 01:03:38.109342 | orchestrator | Wednesday 07 January 2026 01:01:20 +0000 (0:00:00.529) 0:00:21.285 ***** 2026-01-07 01:03:38.109346 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.109350 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.109355 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.109359 | orchestrator | 2026-01-07 01:03:38.109363 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-07 01:03:38.109367 | orchestrator | Wednesday 07 January 2026 01:01:20 +0000 (0:00:00.306) 0:00:21.592 ***** 2026-01-07 01:03:38.109371 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.109376 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.109380 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.109384 | orchestrator | 2026-01-07 01:03:38.109389 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-07 01:03:38.109393 | orchestrator | Wednesday 07 January 2026 01:01:20 +0000 (0:00:00.545) 0:00:22.138 ***** 2026-01-07 01:03:38.109398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:03:38.109402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:03:38.109421 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.109426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:03:38.109430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:03:38.109439 | 
orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.109444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:03:38.109452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:03:38.109462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:03:38.109467 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.109471 | orchestrator | 2026-01-07 01:03:38.109476 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 01:03:38.109480 | orchestrator | Wednesday 07 January 2026 01:01:21 +0000 (0:00:00.704) 0:00:22.842 ***** 2026-01-07 01:03:38.109484 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.109489 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.109493 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.109497 | orchestrator | 2026-01-07 01:03:38.109502 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-07 01:03:38.109506 | orchestrator | Wednesday 07 January 2026 01:01:21 +0000 (0:00:00.296) 0:00:23.138 ***** 2026-01-07 01:03:38.109510 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-07 01:03:38.109515 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-07 01:03:38.109519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-07 01:03:38.109523 | orchestrator | 2026-01-07 01:03:38.109528 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-07 01:03:38.109532 | orchestrator | Wednesday 07 January 2026 01:01:23 +0000 (0:00:01.530) 0:00:24.669 ***** 2026-01-07 01:03:38.109536 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-01-07 01:03:38.109541 | orchestrator | 2026-01-07 01:03:38.109545 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-07 01:03:38.109549 | orchestrator | Wednesday 07 January 2026 01:01:24 +0000 (0:00:00.989) 0:00:25.658 ***** 2026-01-07 01:03:38.109554 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.109558 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.109562 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.109566 | orchestrator | 2026-01-07 01:03:38.109571 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-07 01:03:38.109575 | orchestrator | Wednesday 07 January 2026 01:01:25 +0000 (0:00:00.952) 0:00:26.611 ***** 2026-01-07 01:03:38.109579 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:03:38.109584 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-07 01:03:38.109588 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-07 01:03:38.109592 | orchestrator | 2026-01-07 01:03:38.109597 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-07 01:03:38.109601 | orchestrator | Wednesday 07 January 2026 01:01:26 +0000 (0:00:01.095) 0:00:27.706 ***** 2026-01-07 01:03:38.109609 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:03:38.109614 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:03:38.109619 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:03:38.109623 | orchestrator | 2026-01-07 01:03:38.109627 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-07 01:03:38.109631 | orchestrator | Wednesday 07 January 2026 01:01:26 +0000 (0:00:00.335) 0:00:28.042 ***** 2026-01-07 01:03:38.109636 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-07 01:03:38.109640 | orchestrator | changed: 
[testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-07 01:03:38.109644 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-07 01:03:38.109648 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-07 01:03:38.109653 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-07 01:03:38.109657 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-07 01:03:38.109661 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-07 01:03:38.109666 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-07 01:03:38.109671 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-07 01:03:38.109675 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-07 01:03:38.109679 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-07 01:03:38.109683 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-07 01:03:38.109687 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-07 01:03:38.109692 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-07 01:03:38.109699 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-07 01:03:38.109703 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2026-01-07 01:03:38.109711 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 01:03:38.109716 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 01:03:38.109720 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 01:03:38.109724 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 01:03:38.109729 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 01:03:38.109733 | orchestrator | 2026-01-07 01:03:38.109737 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-07 01:03:38.109742 | orchestrator | Wednesday 07 January 2026 01:01:35 +0000 (0:00:08.657) 0:00:36.700 ***** 2026-01-07 01:03:38.109746 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 01:03:38.109750 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 01:03:38.109755 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 01:03:38.109760 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 01:03:38.109764 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 01:03:38.109769 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 01:03:38.109776 | orchestrator | 2026-01-07 01:03:38.109780 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-07 01:03:38.109785 | orchestrator | Wednesday 07 January 2026 01:01:38 +0000 (0:00:03.282) 0:00:39.983 ***** 2026-01-07 01:03:38.109790 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.109794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.109805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:03:38.109810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:03:38.109903 | orchestrator | 2026-01-07 01:03:38.109911 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 01:03:38.109916 | orchestrator | Wednesday 07 January 2026 01:01:41 +0000 (0:00:02.791) 0:00:42.774 ***** 2026-01-07 01:03:38.109924 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.109929 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.109933 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.109937 | orchestrator | 2026-01-07 01:03:38.109942 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-07 01:03:38.109946 | orchestrator | Wednesday 07 January 2026 01:01:41 +0000 (0:00:00.302) 0:00:43.076 ***** 2026-01-07 01:03:38.109950 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:38.109955 | orchestrator | 2026-01-07 01:03:38.109959 | orchestrator | TASK [keystone : Creating 
Keystone database user and setting permissions] ****** 2026-01-07 01:03:38.109963 | orchestrator | Wednesday 07 January 2026 01:01:44 +0000 (0:00:02.493) 0:00:45.570 ***** 2026-01-07 01:03:38.109971 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:38.109975 | orchestrator | 2026-01-07 01:03:38.109980 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-07 01:03:38.109984 | orchestrator | Wednesday 07 January 2026 01:01:46 +0000 (0:00:02.557) 0:00:48.127 ***** 2026-01-07 01:03:38.109988 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:03:38.109993 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:03:38.109997 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:03:38.110002 | orchestrator | 2026-01-07 01:03:38.110006 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-07 01:03:38.110010 | orchestrator | Wednesday 07 January 2026 01:01:47 +0000 (0:00:00.936) 0:00:49.064 ***** 2026-01-07 01:03:38.110069 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:03:38.110073 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:03:38.110078 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:03:38.110082 | orchestrator | 2026-01-07 01:03:38.110086 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-07 01:03:38.110090 | orchestrator | Wednesday 07 January 2026 01:01:48 +0000 (0:00:00.570) 0:00:49.635 ***** 2026-01-07 01:03:38.110095 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.110099 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.110103 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.110107 | orchestrator | 2026-01-07 01:03:38.110111 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-07 01:03:38.110116 | orchestrator | Wednesday 07 January 2026 01:01:48 +0000 (0:00:00.332) 0:00:49.968 ***** 
2026-01-07 01:03:38.110120 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:38.110125 | orchestrator | 2026-01-07 01:03:38.110129 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-01-07 01:03:38.110133 | orchestrator | Wednesday 07 January 2026 01:02:02 +0000 (0:00:13.725) 0:01:03.693 ***** 2026-01-07 01:03:38.110137 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:38.110141 | orchestrator | 2026-01-07 01:03:38.110145 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-07 01:03:38.110150 | orchestrator | Wednesday 07 January 2026 01:02:14 +0000 (0:00:11.644) 0:01:15.338 ***** 2026-01-07 01:03:38.110154 | orchestrator | 2026-01-07 01:03:38.110158 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-07 01:03:38.110163 | orchestrator | Wednesday 07 January 2026 01:02:14 +0000 (0:00:00.068) 0:01:15.406 ***** 2026-01-07 01:03:38.110167 | orchestrator | 2026-01-07 01:03:38.110171 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-07 01:03:38.110175 | orchestrator | Wednesday 07 January 2026 01:02:14 +0000 (0:00:00.065) 0:01:15.472 ***** 2026-01-07 01:03:38.110179 | orchestrator | 2026-01-07 01:03:38.110184 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-01-07 01:03:38.110188 | orchestrator | Wednesday 07 January 2026 01:02:14 +0000 (0:00:00.075) 0:01:15.547 ***** 2026-01-07 01:03:38.110193 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:38.110197 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:03:38.110202 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:03:38.110206 | orchestrator | 2026-01-07 01:03:38.110211 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-01-07 01:03:38.110215 | orchestrator | 
Wednesday 07 January 2026 01:02:23 +0000 (0:00:09.611) 0:01:25.158 ***** 2026-01-07 01:03:38.110219 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:03:38.110223 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:03:38.110228 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:38.110232 | orchestrator | 2026-01-07 01:03:38.110236 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-01-07 01:03:38.110240 | orchestrator | Wednesday 07 January 2026 01:02:31 +0000 (0:00:07.459) 0:01:32.618 ***** 2026-01-07 01:03:38.110244 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:03:38.110249 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:03:38.110307 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:38.110312 | orchestrator | 2026-01-07 01:03:38.110316 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 01:03:38.110321 | orchestrator | Wednesday 07 January 2026 01:02:39 +0000 (0:00:07.852) 0:01:40.471 ***** 2026-01-07 01:03:38.110325 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:03:38.110329 | orchestrator | 2026-01-07 01:03:38.110333 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-01-07 01:03:38.110338 | orchestrator | Wednesday 07 January 2026 01:02:39 +0000 (0:00:00.734) 0:01:41.205 ***** 2026-01-07 01:03:38.110342 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:03:38.110347 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:03:38.110351 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:03:38.110356 | orchestrator | 2026-01-07 01:03:38.110360 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-01-07 01:03:38.110364 | orchestrator | Wednesday 07 January 2026 01:02:40 +0000 (0:00:00.689) 0:01:41.895 ***** 2026-01-07 
01:03:38.110369 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:38.110373 | orchestrator | 2026-01-07 01:03:38.110377 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-01-07 01:03:38.110382 | orchestrator | Wednesday 07 January 2026 01:02:42 +0000 (0:00:01.473) 0:01:43.368 ***** 2026-01-07 01:03:38.110391 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-01-07 01:03:38.110396 | orchestrator | 2026-01-07 01:03:38.110401 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-01-07 01:03:38.110408 | orchestrator | Wednesday 07 January 2026 01:02:54 +0000 (0:00:12.448) 0:01:55.816 ***** 2026-01-07 01:03:38.110412 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-01-07 01:03:38.110453 | orchestrator | 2026-01-07 01:03:38.110459 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-01-07 01:03:38.110464 | orchestrator | Wednesday 07 January 2026 01:03:22 +0000 (0:00:28.105) 0:02:23.922 ***** 2026-01-07 01:03:38.110469 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-01-07 01:03:38.110473 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-01-07 01:03:38.110478 | orchestrator | 2026-01-07 01:03:38.110482 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-01-07 01:03:38.110487 | orchestrator | Wednesday 07 January 2026 01:03:30 +0000 (0:00:07.907) 0:02:31.830 ***** 2026-01-07 01:03:38.110491 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.110495 | orchestrator | 2026-01-07 01:03:38.110500 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-01-07 01:03:38.110504 | orchestrator | Wednesday 07 January 2026 01:03:30 +0000 
(0:00:00.134) 0:02:31.965 ***** 2026-01-07 01:03:38.110508 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.110512 | orchestrator | 2026-01-07 01:03:38.110516 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-01-07 01:03:38.110521 | orchestrator | Wednesday 07 January 2026 01:03:30 +0000 (0:00:00.115) 0:02:32.081 ***** 2026-01-07 01:03:38.110525 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.110529 | orchestrator | 2026-01-07 01:03:38.110577 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-01-07 01:03:38.110582 | orchestrator | Wednesday 07 January 2026 01:03:30 +0000 (0:00:00.121) 0:02:32.202 ***** 2026-01-07 01:03:38.110587 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.110591 | orchestrator | 2026-01-07 01:03:38.110596 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-01-07 01:03:38.110600 | orchestrator | Wednesday 07 January 2026 01:03:31 +0000 (0:00:00.530) 0:02:32.733 ***** 2026-01-07 01:03:38.110605 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:03:38.110609 | orchestrator | 2026-01-07 01:03:38.110613 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 01:03:38.110623 | orchestrator | Wednesday 07 January 2026 01:03:35 +0000 (0:00:03.620) 0:02:36.354 ***** 2026-01-07 01:03:38.110628 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:38.110632 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:38.110636 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:38.110640 | orchestrator | 2026-01-07 01:03:38.110645 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:03:38.110651 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 01:03:38.110657 | 
orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-07 01:03:38.110661 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-07 01:03:38.110666 | orchestrator | 2026-01-07 01:03:38.110670 | orchestrator | 2026-01-07 01:03:38.110674 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:03:38.110679 | orchestrator | Wednesday 07 January 2026 01:03:35 +0000 (0:00:00.494) 0:02:36.848 ***** 2026-01-07 01:03:38.110683 | orchestrator | =============================================================================== 2026-01-07 01:03:38.110688 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.11s 2026-01-07 01:03:38.110692 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.73s 2026-01-07 01:03:38.110697 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.45s 2026-01-07 01:03:38.110701 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.64s 2026-01-07 01:03:38.110706 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.61s 2026-01-07 01:03:38.110710 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.66s 2026-01-07 01:03:38.110715 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.91s 2026-01-07 01:03:38.110719 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.85s 2026-01-07 01:03:38.110723 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.46s 2026-01-07 01:03:38.110728 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.29s 2026-01-07 01:03:38.110732 | orchestrator | keystone : Copying 
over config.json files for services ------------------ 3.69s 2026-01-07 01:03:38.110736 | orchestrator | keystone : Creating default user role ----------------------------------- 3.62s 2026-01-07 01:03:38.110741 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.60s 2026-01-07 01:03:38.110745 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.28s 2026-01-07 01:03:38.110749 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.79s 2026-01-07 01:03:38.110755 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.56s 2026-01-07 01:03:38.110762 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.49s 2026-01-07 01:03:38.110775 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.60s 2026-01-07 01:03:38.110782 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.53s 2026-01-07 01:03:38.110797 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.47s 2026-01-07 01:03:38.110805 | orchestrator | 2026-01-07 01:03:38 | INFO  | Task d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:03:38.111435 | orchestrator | 2026-01-07 01:03:38 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:03:38.112633 | orchestrator | 2026-01-07 01:03:38 | INFO  | Task 97480df9-abd2-4067-bd1f-8c997959261b is in state SUCCESS 2026-01-07 01:03:38.115324 | orchestrator | 2026-01-07 01:03:38 | INFO  | Task 1a7d7d68-75ea-49f5-a943-c481011bedde is in state STARTED 2026-01-07 01:03:38.115373 | orchestrator | 2026-01-07 01:03:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:41.144151 | orchestrator | 2026-01-07 01:03:41 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:03:41.145615 | 
orchestrator | 2026-01-07 01:03:41 | INFO  | Task e5b45a27-f8aa-4972-93a1-7672ac840f36 is in state STARTED
[... repeated per-task "is in state STARTED" / "Wait 1 second(s) until the next check" polling output from 01:03:41 to 01:05:12 elided; during this span task e5b45a27-f8aa-4972-93a1-7672ac840f36 reached SUCCESS at 01:03:44 and task ad844353-f3bf-4943-971d-36b749eae932 was first reported STARTED at 01:03:47 ...]
2026-01-07 01:05:12.328908 | orchestrator | 2026-01-07 01:05:12 | INFO  | Task 
d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:12.328981 | orchestrator | 2026-01-07 01:05:12 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:12.328988 | orchestrator | 2026-01-07 01:05:12 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:12.328996 | orchestrator | 2026-01-07 01:05:12 | INFO  | Task 1a7d7d68-75ea-49f5-a943-c481011bedde is in state STARTED 2026-01-07 01:05:12.329003 | orchestrator | 2026-01-07 01:05:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:15.355706 | orchestrator | 2026-01-07 01:05:15 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:15.356284 | orchestrator | 2026-01-07 01:05:15 | INFO  | Task d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:15.356954 | orchestrator | 2026-01-07 01:05:15 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:15.357777 | orchestrator | 2026-01-07 01:05:15 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:15.358170 | orchestrator | 2026-01-07 01:05:15 | INFO  | Task 1a7d7d68-75ea-49f5-a943-c481011bedde is in state SUCCESS 2026-01-07 01:05:15.358535 | orchestrator | 2026-01-07 01:05:15.358553 | orchestrator | 2026-01-07 01:05:15.358559 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-01-07 01:05:15.358566 | orchestrator | 2026-01-07 01:05:15.358572 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-01-07 01:05:15.358578 | orchestrator | Wednesday 07 January 2026 01:02:38 +0000 (0:00:00.240) 0:00:00.240 ***** 2026-01-07 01:05:15.358585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-01-07 01:05:15.358591 | orchestrator | 2026-01-07 
01:05:15.358597 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-01-07 01:05:15.358619 | orchestrator | Wednesday 07 January 2026 01:02:38 +0000 (0:00:00.217) 0:00:00.458 ***** 2026-01-07 01:05:15.358625 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-01-07 01:05:15.358631 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-01-07 01:05:15.358638 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-01-07 01:05:15.358643 | orchestrator | 2026-01-07 01:05:15.358649 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-01-07 01:05:15.358654 | orchestrator | Wednesday 07 January 2026 01:02:39 +0000 (0:00:01.266) 0:00:01.725 ***** 2026-01-07 01:05:15.358660 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-01-07 01:05:15.358667 | orchestrator | 2026-01-07 01:05:15.358673 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-01-07 01:05:15.358714 | orchestrator | Wednesday 07 January 2026 01:02:41 +0000 (0:00:01.399) 0:00:03.124 ***** 2026-01-07 01:05:15.358721 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.358727 | orchestrator | 2026-01-07 01:05:15.358734 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-01-07 01:05:15.358740 | orchestrator | Wednesday 07 January 2026 01:02:42 +0000 (0:00:00.962) 0:00:04.087 ***** 2026-01-07 01:05:15.358746 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.358752 | orchestrator | 2026-01-07 01:05:15.358759 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-01-07 01:05:15.358765 | orchestrator | Wednesday 07 January 2026 01:02:43 +0000 (0:00:00.960) 0:00:05.047 ***** 2026-01-07 
01:05:15.358771 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-01-07 01:05:15.358778 | orchestrator | ok: [testbed-manager] 2026-01-07 01:05:15.358784 | orchestrator | 2026-01-07 01:05:15.358801 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-01-07 01:05:15.358808 | orchestrator | Wednesday 07 January 2026 01:03:25 +0000 (0:00:42.123) 0:00:47.171 ***** 2026-01-07 01:05:15.358815 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-01-07 01:05:15.358822 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-01-07 01:05:15.358829 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-01-07 01:05:15.358836 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-01-07 01:05:15.358843 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-01-07 01:05:15.358849 | orchestrator | 2026-01-07 01:05:15.358854 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-01-07 01:05:15.358860 | orchestrator | Wednesday 07 January 2026 01:03:29 +0000 (0:00:04.130) 0:00:51.302 ***** 2026-01-07 01:05:15.358866 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-01-07 01:05:15.358871 | orchestrator | 2026-01-07 01:05:15.358877 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-01-07 01:05:15.358882 | orchestrator | Wednesday 07 January 2026 01:03:29 +0000 (0:00:00.467) 0:00:51.769 ***** 2026-01-07 01:05:15.358887 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:15.358893 | orchestrator | 2026-01-07 01:05:15.358899 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-01-07 01:05:15.358905 | orchestrator | Wednesday 07 January 2026 01:03:29 +0000 (0:00:00.132) 0:00:51.902 ***** 2026-01-07 01:05:15.358911 | orchestrator | skipping: 
[testbed-manager] 2026-01-07 01:05:15.358917 | orchestrator | 2026-01-07 01:05:15.358923 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-01-07 01:05:15.358929 | orchestrator | Wednesday 07 January 2026 01:03:30 +0000 (0:00:00.535) 0:00:52.437 ***** 2026-01-07 01:05:15.358935 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.358941 | orchestrator | 2026-01-07 01:05:15.358947 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-01-07 01:05:15.358953 | orchestrator | Wednesday 07 January 2026 01:03:31 +0000 (0:00:01.389) 0:00:53.826 ***** 2026-01-07 01:05:15.358967 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.358973 | orchestrator | 2026-01-07 01:05:15.358980 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-01-07 01:05:15.358984 | orchestrator | Wednesday 07 January 2026 01:03:32 +0000 (0:00:00.768) 0:00:54.595 ***** 2026-01-07 01:05:15.358988 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.358992 | orchestrator | 2026-01-07 01:05:15.358995 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-01-07 01:05:15.358999 | orchestrator | Wednesday 07 January 2026 01:03:33 +0000 (0:00:00.635) 0:00:55.230 ***** 2026-01-07 01:05:15.359003 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-01-07 01:05:15.359007 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-01-07 01:05:15.359011 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-01-07 01:05:15.359015 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-01-07 01:05:15.359018 | orchestrator | 2026-01-07 01:05:15.359022 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:05:15.359026 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-01-07 01:05:15.359030 | orchestrator | 2026-01-07 01:05:15.359034 | orchestrator | 2026-01-07 01:05:15.359046 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:05:15.359051 | orchestrator | Wednesday 07 January 2026 01:03:34 +0000 (0:00:01.537) 0:00:56.768 ***** 2026-01-07 01:05:15.359057 | orchestrator | =============================================================================== 2026-01-07 01:05:15.359063 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.12s 2026-01-07 01:05:15.359070 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.13s 2026-01-07 01:05:15.359075 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.54s 2026-01-07 01:05:15.359082 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.40s 2026-01-07 01:05:15.359087 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.39s 2026-01-07 01:05:15.359093 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.27s 2026-01-07 01:05:15.359099 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s 2026-01-07 01:05:15.359105 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s 2026-01-07 01:05:15.359111 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s 2026-01-07 01:05:15.359117 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2026-01-07 01:05:15.359123 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.54s 2026-01-07 01:05:15.359130 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2026-01-07 01:05:15.359136 | 
orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2026-01-07 01:05:15.359142 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-01-07 01:05:15.359149 | orchestrator | 2026-01-07 01:05:15.359155 | orchestrator | 2026-01-07 01:05:15.359162 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:05:15.359168 | orchestrator | 2026-01-07 01:05:15.359174 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:05:15.359181 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:00:00.338) 0:00:00.338 ***** 2026-01-07 01:05:15.359185 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:05:15.359190 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:05:15.359197 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:05:15.359203 | orchestrator | 2026-01-07 01:05:15.359209 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:05:15.359221 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:00:00.300) 0:00:00.639 ***** 2026-01-07 01:05:15.359233 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-07 01:05:15.359237 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-07 01:05:15.359241 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-07 01:05:15.359244 | orchestrator | 2026-01-07 01:05:15.359248 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-01-07 01:05:15.359252 | orchestrator | 2026-01-07 01:05:15.359256 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-07 01:05:15.359259 | orchestrator | Wednesday 07 January 2026 01:03:42 +0000 (0:00:00.809) 0:00:01.448 ***** 2026-01-07 01:05:15.359263 | orchestrator | ok: 
[testbed-node-2] 2026-01-07 01:05:15.359267 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:05:15.359271 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:05:15.359274 | orchestrator | 2026-01-07 01:05:15.359278 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:05:15.359282 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:05:15.359287 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:05:15.359291 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:05:15.359294 | orchestrator | 2026-01-07 01:05:15.359298 | orchestrator | 2026-01-07 01:05:15.359302 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:05:15.359306 | orchestrator | Wednesday 07 January 2026 01:03:42 +0000 (0:00:00.812) 0:00:02.261 ***** 2026-01-07 01:05:15.359310 | orchestrator | =============================================================================== 2026-01-07 01:05:15.359315 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.81s 2026-01-07 01:05:15.359321 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2026-01-07 01:05:15.359327 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-01-07 01:05:15.359334 | orchestrator | 2026-01-07 01:05:15.359339 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 01:05:15.359346 | orchestrator | 2.16.14 2026-01-07 01:05:15.359352 | orchestrator | 2026-01-07 01:05:15.359359 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-01-07 01:05:15.359366 | orchestrator | 2026-01-07 01:05:15.359372 | orchestrator 
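The `Waiting for Keystone public port to be UP` task above boils down to a TCP connect probe in a loop, as Ansible's `wait_for` module does. A self-contained sketch of that check (host, port, and timeouts are placeholders, not values from this job):

```python
import socket
import time

def wait_for_port(host, port, timeout=300.0, interval=2.0):
    """Poll until a TCP connect to host:port succeeds, in the style
    of Ansible's wait_for module. Returns True once reachable."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # A successful connect means the service is listening.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable in time")
            time.sleep(interval)
```

Note this only proves the port accepts connections; it says nothing about whether Keystone answers API requests correctly.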
| TASK [Disable the ceph dashboard] ********************************************** 2026-01-07 01:05:15.359377 | orchestrator | Wednesday 07 January 2026 01:03:39 +0000 (0:00:00.338) 0:00:00.338 ***** 2026-01-07 01:05:15.359383 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.359389 | orchestrator | 2026-01-07 01:05:15.359394 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-07 01:05:15.359401 | orchestrator | Wednesday 07 January 2026 01:03:42 +0000 (0:00:02.346) 0:00:02.685 ***** 2026-01-07 01:05:15.359407 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.359413 | orchestrator | 2026-01-07 01:05:15.359420 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-07 01:05:15.359426 | orchestrator | Wednesday 07 January 2026 01:03:43 +0000 (0:00:00.898) 0:00:03.584 ***** 2026-01-07 01:05:15.359439 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.359446 | orchestrator | 2026-01-07 01:05:15.359451 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-07 01:05:15.359458 | orchestrator | Wednesday 07 January 2026 01:03:44 +0000 (0:00:00.891) 0:00:04.475 ***** 2026-01-07 01:05:15.359464 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.359471 | orchestrator | 2026-01-07 01:05:15.359478 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-07 01:05:15.359484 | orchestrator | Wednesday 07 January 2026 01:03:44 +0000 (0:00:00.936) 0:00:05.411 ***** 2026-01-07 01:05:15.359490 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.359502 | orchestrator | 2026-01-07 01:05:15.359509 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-07 01:05:15.359515 | orchestrator | Wednesday 07 January 2026 01:03:45 +0000 (0:00:01.007) 0:00:06.419 ***** 2026-01-07 
01:05:15.359522 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.359528 | orchestrator | 2026-01-07 01:05:15.359535 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-07 01:05:15.359542 | orchestrator | Wednesday 07 January 2026 01:03:46 +0000 (0:00:00.867) 0:00:07.286 ***** 2026-01-07 01:05:15.359549 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.359556 | orchestrator | 2026-01-07 01:05:15.359563 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-07 01:05:15.359570 | orchestrator | Wednesday 07 January 2026 01:03:47 +0000 (0:00:01.134) 0:00:08.421 ***** 2026-01-07 01:05:15.359577 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.359584 | orchestrator | 2026-01-07 01:05:15.359592 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-07 01:05:15.359597 | orchestrator | Wednesday 07 January 2026 01:03:48 +0000 (0:00:00.967) 0:00:09.389 ***** 2026-01-07 01:05:15.359601 | orchestrator | changed: [testbed-manager] 2026-01-07 01:05:15.359606 | orchestrator | 2026-01-07 01:05:15.359610 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-07 01:05:15.359615 | orchestrator | Wednesday 07 January 2026 01:04:50 +0000 (0:01:02.023) 0:01:11.412 ***** 2026-01-07 01:05:15.359621 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:15.359627 | orchestrator | 2026-01-07 01:05:15.359634 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-07 01:05:15.359640 | orchestrator | 2026-01-07 01:05:15.359647 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-07 01:05:15.359654 | orchestrator | Wednesday 07 January 2026 01:04:51 +0000 (0:00:00.126) 0:01:11.539 ***** 2026-01-07 01:05:15.359661 | orchestrator | changed: 
[testbed-node-0] 2026-01-07 01:05:15.359667 | orchestrator | 2026-01-07 01:05:15.359692 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-07 01:05:15.359697 | orchestrator | 2026-01-07 01:05:15.359702 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-07 01:05:15.359709 | orchestrator | Wednesday 07 January 2026 01:05:02 +0000 (0:00:11.455) 0:01:22.994 ***** 2026-01-07 01:05:15.359716 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:15.359722 | orchestrator | 2026-01-07 01:05:15.359729 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-07 01:05:15.359736 | orchestrator | 2026-01-07 01:05:15.359743 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-07 01:05:15.359751 | orchestrator | Wednesday 07 January 2026 01:05:13 +0000 (0:00:11.140) 0:01:34.135 ***** 2026-01-07 01:05:15.359757 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:15.359764 | orchestrator | 2026-01-07 01:05:15.359770 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:05:15.359777 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 01:05:15.359784 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:05:15.359789 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:05:15.359792 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:05:15.359797 | orchestrator | 2026-01-07 01:05:15.359800 | orchestrator | 2026-01-07 01:05:15.359804 | orchestrator | 2026-01-07 01:05:15.359809 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-07 01:05:15.359816 | orchestrator | Wednesday 07 January 2026 01:05:14 +0000 (0:00:01.042) 0:01:35.177 ***** 2026-01-07 01:05:15.359831 | orchestrator | =============================================================================== 2026-01-07 01:05:15.359835 | orchestrator | Create admin user ------------------------------------------------------ 62.02s 2026-01-07 01:05:15.359839 | orchestrator | Restart ceph manager service ------------------------------------------- 23.64s 2026-01-07 01:05:15.359843 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.35s 2026-01-07 01:05:15.359847 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.13s 2026-01-07 01:05:15.359851 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.01s 2026-01-07 01:05:15.359854 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.97s 2026-01-07 01:05:15.359858 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.94s 2026-01-07 01:05:15.359862 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s 2026-01-07 01:05:15.359866 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.89s 2026-01-07 01:05:15.359870 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.87s 2026-01-07 01:05:15.359877 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2026-01-07 01:05:15.359887 | orchestrator | 2026-01-07 01:05:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:18.395282 | orchestrator | 2026-01-07 01:05:18 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:18.395663 | orchestrator | 2026-01-07 01:05:18 | INFO  | 
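Each `Set mgr/dashboard/...` task in the recap above corresponds to one `ceph config set mgr <key> <value>` invocation, bracketed by disabling and re-enabling the dashboard mgr module so the settings take effect. A sketch that builds the equivalent command sequence from the settings shown (the helper is illustrative; the actual play runs these through the ceph client container):

```python
# Settings applied by the "Bootstraph ceph dashboard" play, in order.
DASHBOARD_SETTINGS = [
    ("mgr/dashboard/ssl", "false"),
    ("mgr/dashboard/server_port", "7000"),
    ("mgr/dashboard/server_addr", "0.0.0.0"),
    ("mgr/dashboard/standby_behaviour", "error"),
    ("mgr/dashboard/standby_error_status_code", "404"),
]

def dashboard_commands(settings=DASHBOARD_SETTINGS):
    """Build the ceph CLI calls equivalent to the tasks above:
    disable the module, apply each config key, then re-enable it."""
    cmds = [["ceph", "mgr", "module", "disable", "dashboard"]]
    cmds += [["ceph", "config", "set", "mgr", key, value]
             for key, value in settings]
    cmds.append(["ceph", "mgr", "module", "enable", "dashboard"])
    return cmds
```

The mgr restarts that follow in the log (`Restart ceph manager service` on each node) are what make the dashboard pick up the new port and address.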
Task d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:18.396568 | orchestrator | 2026-01-07 01:05:18 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:18.398119 | orchestrator | 2026-01-07 01:05:18 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:18.398855 | orchestrator | 2026-01-07 01:05:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:21.422303 | orchestrator | 2026-01-07 01:05:21 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:21.422906 | orchestrator | 2026-01-07 01:05:21 | INFO  | Task d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:21.423678 | orchestrator | 2026-01-07 01:05:21 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:21.424445 | orchestrator | 2026-01-07 01:05:21 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:21.424472 | orchestrator | 2026-01-07 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:24.459579 | orchestrator | 2026-01-07 01:05:24 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:24.459632 | orchestrator | 2026-01-07 01:05:24 | INFO  | Task d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:24.459638 | orchestrator | 2026-01-07 01:05:24 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:24.459642 | orchestrator | 2026-01-07 01:05:24 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:24.459655 | orchestrator | 2026-01-07 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:27.488708 | orchestrator | 2026-01-07 01:05:27 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:27.489869 | orchestrator | 2026-01-07 01:05:27 | INFO  | Task 
d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:27.490756 | orchestrator | 2026-01-07 01:05:27 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:27.491604 | orchestrator | 2026-01-07 01:05:27 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:27.491630 | orchestrator | 2026-01-07 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:30.519354 | orchestrator | 2026-01-07 01:05:30 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:30.520319 | orchestrator | 2026-01-07 01:05:30 | INFO  | Task d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:30.524917 | orchestrator | 2026-01-07 01:05:30 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:30.525454 | orchestrator | 2026-01-07 01:05:30 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:30.525492 | orchestrator | 2026-01-07 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:33.552921 | orchestrator | 2026-01-07 01:05:33 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:33.553432 | orchestrator | 2026-01-07 01:05:33 | INFO  | Task d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:33.554143 | orchestrator | 2026-01-07 01:05:33 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:33.554985 | orchestrator | 2026-01-07 01:05:33 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:33.555138 | orchestrator | 2026-01-07 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:36.580561 | orchestrator | 2026-01-07 01:05:36 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:36.580683 | orchestrator | 2026-01-07 01:05:36 | INFO  | Task 
d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:36.581368 | orchestrator | 2026-01-07 01:05:36 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:36.581999 | orchestrator | 2026-01-07 01:05:36 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:36.582056 | orchestrator | 2026-01-07 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:39.608117 | orchestrator | 2026-01-07 01:05:39 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:39.608476 | orchestrator | 2026-01-07 01:05:39 | INFO  | Task d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:39.609129 | orchestrator | 2026-01-07 01:05:39 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:39.609665 | orchestrator | 2026-01-07 01:05:39 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:39.609690 | orchestrator | 2026-01-07 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:42.637119 | orchestrator | 2026-01-07 01:05:42 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:42.639173 | orchestrator | 2026-01-07 01:05:42 | INFO  | Task d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state STARTED 2026-01-07 01:05:42.642534 | orchestrator | 2026-01-07 01:05:42 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:42.643948 | orchestrator | 2026-01-07 01:05:42 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:42.644005 | orchestrator | 2026-01-07 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:45.686672 | orchestrator | 2026-01-07 01:05:45 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:45.691325 | orchestrator | 2026-01-07 01:05:45 | INFO  | Task 
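The repeated `Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check` records come from the OSISM CLI polling its background task states until each reaches a terminal state. A generic sketch of that poll loop (the state source is a stand-in callable, not the real task API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=1000,
                   sleep=time.sleep):
    """Poll each task's state until none remains STARTED, in the
    style of the wait loop in the job output above."""
    pending = set(task_ids)
    for _ in range(max_checks):
        for task_id in sorted(pending):
            if get_state(task_id) in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if not pending:
            return True
        sleep(interval)  # "Wait 1 second(s) until the next check"
    raise TimeoutError(f"tasks still pending: {sorted(pending)}")
```

In the log, four task UUIDs are polled together and drop out one by one as they reach SUCCESS.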
d539f3d9-bb3f-48b4-af68-7828f49ff8cc is in state SUCCESS 2026-01-07 01:05:45.693750 | orchestrator | 2026-01-07 01:05:45.693793 | orchestrator | 2026-01-07 01:05:45.693798 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:05:45.693804 | orchestrator | 2026-01-07 01:05:45.693809 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:05:45.693814 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:00:00.239) 0:00:00.239 ***** 2026-01-07 01:05:45.693819 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:05:45.693825 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:05:45.693839 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:05:45.693844 | orchestrator | 2026-01-07 01:05:45.693849 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:05:45.693854 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:00:00.286) 0:00:00.525 ***** 2026-01-07 01:05:45.693858 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-07 01:05:45.693864 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-07 01:05:45.693868 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-07 01:05:45.693873 | orchestrator | 2026-01-07 01:05:45.693878 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-07 01:05:45.693883 | orchestrator | 2026-01-07 01:05:45.693887 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-07 01:05:45.693892 | orchestrator | Wednesday 07 January 2026 01:03:42 +0000 (0:00:00.736) 0:00:01.262 ***** 2026-01-07 01:05:45.693897 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:05:45.693903 | orchestrator | 2026-01-07 01:05:45.693907 | 
orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-01-07 01:05:45.693912 | orchestrator | Wednesday 07 January 2026 01:03:43 +0000 (0:00:00.582) 0:00:01.844 ***** 2026-01-07 01:05:45.693918 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-01-07 01:05:45.693922 | orchestrator | 2026-01-07 01:05:45.693966 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-01-07 01:05:45.693971 | orchestrator | Wednesday 07 January 2026 01:03:47 +0000 (0:00:04.140) 0:00:05.984 ***** 2026-01-07 01:05:45.693976 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-01-07 01:05:45.693981 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-01-07 01:05:45.693986 | orchestrator | 2026-01-07 01:05:45.693991 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-01-07 01:05:45.693995 | orchestrator | Wednesday 07 January 2026 01:03:53 +0000 (0:00:06.134) 0:00:12.119 ***** 2026-01-07 01:05:45.694000 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-01-07 01:05:45.694005 | orchestrator | 2026-01-07 01:05:45.694010 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-01-07 01:05:45.694072 | orchestrator | Wednesday 07 January 2026 01:03:56 +0000 (0:00:03.237) 0:00:15.357 ***** 2026-01-07 01:05:45.694146 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:05:45.694152 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-01-07 01:05:45.694157 | orchestrator | 2026-01-07 01:05:45.694162 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-01-07 01:05:45.694166 | orchestrator | Wednesday 07 January 2026 01:04:01 +0000 
(0:00:04.501) 0:00:19.858 ***** 2026-01-07 01:05:45.694171 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:05:45.694176 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-01-07 01:05:45.694181 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-01-07 01:05:45.694186 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-01-07 01:05:45.694191 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-01-07 01:05:45.694206 | orchestrator | 2026-01-07 01:05:45.694211 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-01-07 01:05:45.694216 | orchestrator | Wednesday 07 January 2026 01:04:19 +0000 (0:00:17.892) 0:00:37.751 ***** 2026-01-07 01:05:45.694221 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-01-07 01:05:45.694225 | orchestrator | 2026-01-07 01:05:45.694230 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-01-07 01:05:45.694235 | orchestrator | Wednesday 07 January 2026 01:04:23 +0000 (0:00:03.969) 0:00:41.721 ***** 2026-01-07 01:05:45.694242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:45.694263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:45.694268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:45.694274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.694283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.694288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.694297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.694305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.694310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.694315 | 
orchestrator |
2026-01-07 01:05:45.694320 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-01-07 01:05:45.694325 | orchestrator | Wednesday 07 January 2026 01:04:25 +0000 (0:00:01.849) 0:00:43.571 *****
2026-01-07 01:05:45.694330 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-01-07 01:05:45.694335 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-01-07 01:05:45.694340 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-01-07 01:05:45.694345 | orchestrator |
2026-01-07 01:05:45.694351 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-01-07 01:05:45.694356 | orchestrator | Wednesday 07 January 2026 01:04:26 +0000 (0:00:01.171) 0:00:44.742 *****
2026-01-07 01:05:45.694361 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:05:45.694369 | orchestrator |
2026-01-07 01:05:45.694375 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-01-07 01:05:45.694380 | orchestrator | Wednesday 07 January 2026 01:04:26 +0000 (0:00:00.402) 0:00:45.144 *****
2026-01-07 01:05:45.694385 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:05:45.694390 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:05:45.694395 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:05:45.694401 | orchestrator |
2026-01-07 01:05:45.694406 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-07 01:05:45.694412 | orchestrator | Wednesday 07 January 2026 01:04:28 +0000 (0:00:01.457) 0:00:46.602 *****
2026-01-07 01:05:45.694417 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:05:45.694422 | orchestrator |
2026-01-07 01:05:45.694427 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-01-07 01:05:45.694432 | orchestrator | Wednesday 07 January 2026 01:04:28 +0000 (0:00:00.908) 0:00:47.510 *****
2026-01-07 01:05:45.694438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2',
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694499 | orchestrator |
2026-01-07 01:05:45.694504 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-01-07 01:05:45.694509 | orchestrator | Wednesday 07 January 2026 01:04:34 +0000 (0:00:05.116) 0:00:52.626 *****
2026-01-07 01:05:45.694514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694533 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:05:45.694541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694561 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:05:45.694566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694581 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:05:45.694586 | orchestrator |
2026-01-07 01:05:45.694590 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-01-07 01:05:45.694595 | orchestrator | Wednesday 07 January 2026 01:04:35 +0000 (0:00:01.320) 0:00:53.947 *****
2026-01-07 01:05:45.694609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694629 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:05:45.694634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694639 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694650 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:05:45.694660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694679 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:05:45.694684 | orchestrator |
2026-01-07 01:05:45.694689 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-01-07 01:05:45.694694 | orchestrator | Wednesday 07 January 2026 01:04:36 +0000 (0:00:01.457) 0:00:55.405 *****
2026-01-07 01:05:45.694699 | orchestrator | changed: [testbed-node-0] => (item={'key':
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.694870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.694911 | orchestrator |
2026-01-07 01:05:45.694916 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-01-07 01:05:45.694921 | orchestrator | Wednesday 07 January 2026 01:04:40 +0000 (0:00:03.942) 0:00:59.347 *****
2026-01-07 01:05:45.694943 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:45.694949 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:05:45.694954 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:05:45.694959 | orchestrator |
2026-01-07 01:05:45.694964 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-01-07 01:05:45.694968 | orchestrator | Wednesday 07 January 2026 01:04:44 +0000 (0:00:03.713) 0:01:03.060 *****
2026-01-07 01:05:45.694973 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:05:45.694978 | orchestrator |
2026-01-07 01:05:45.694983 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-01-07 01:05:45.694987 | orchestrator | Wednesday 07 January 2026 01:04:46 +0000 (0:00:01.728) 0:01:04.789 *****
2026-01-07 01:05:45.694992 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:05:45.694997 | orchestrator | skipping: [testbed-node-1]
2026-01-07
01:05:45.695002 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:05:45.695007 | orchestrator |
2026-01-07 01:05:45.695011 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-01-07 01:05:45.695016 | orchestrator | Wednesday 07 January 2026 01:04:46 +0000 (0:00:00.592) 0:01:05.382 *****
2026-01-07 01:05:45.695021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.695026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.695035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 01:05:45.695046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 01:05:45.695051 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695067 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695080 | orchestrator | 2026-01-07 01:05:45.695085 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-07 01:05:45.695092 | orchestrator | Wednesday 07 January 2026 01:04:57 +0000 (0:00:10.940) 0:01:16.322 ***** 2026-01-07 01:05:45.695100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:05:45.695105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:05:45.695110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:45.695115 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:45.695120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:05:45.695128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:05:45.695136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:45.695141 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:45.695148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:05:45.695153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:05:45.695158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:45.695163 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:45.695168 | orchestrator | 2026-01-07 01:05:45.695173 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-01-07 01:05:45.695178 | orchestrator | Wednesday 07 January 2026 01:04:58 +0000 (0:00:00.528) 0:01:16.851 ***** 2026-01-07 01:05:45.695183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:45.695199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:45.695205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:45.695210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:45.695250 | orchestrator | 2026-01-07 01:05:45.695254 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-07 01:05:45.695259 | orchestrator | Wednesday 07 January 2026 01:05:02 +0000 (0:00:04.066) 0:01:20.917 ***** 2026-01-07 01:05:45.695264 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:45.695269 | orchestrator | skipping: 
[testbed-node-1] 2026-01-07 01:05:45.695274 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:45.695279 | orchestrator | 2026-01-07 01:05:45.695283 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-07 01:05:45.695288 | orchestrator | Wednesday 07 January 2026 01:05:02 +0000 (0:00:00.517) 0:01:21.435 ***** 2026-01-07 01:05:45.695293 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:45.695298 | orchestrator | 2026-01-07 01:05:45.695302 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-07 01:05:45.695307 | orchestrator | Wednesday 07 January 2026 01:05:05 +0000 (0:00:02.173) 0:01:23.608 ***** 2026-01-07 01:05:45.695312 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:45.695317 | orchestrator | 2026-01-07 01:05:45.695321 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-07 01:05:45.695326 | orchestrator | Wednesday 07 January 2026 01:05:07 +0000 (0:00:02.002) 0:01:25.611 ***** 2026-01-07 01:05:45.695331 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:45.695336 | orchestrator | 2026-01-07 01:05:45.695340 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-07 01:05:45.695348 | orchestrator | Wednesday 07 January 2026 01:05:18 +0000 (0:00:11.008) 0:01:36.619 ***** 2026-01-07 01:05:45.695353 | orchestrator | 2026-01-07 01:05:45.695358 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-07 01:05:45.695362 | orchestrator | Wednesday 07 January 2026 01:05:18 +0000 (0:00:00.162) 0:01:36.782 ***** 2026-01-07 01:05:45.695367 | orchestrator | 2026-01-07 01:05:45.695372 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-07 01:05:45.695376 | orchestrator | Wednesday 07 January 2026 01:05:18 +0000 (0:00:00.103) 
0:01:36.885 ***** 2026-01-07 01:05:45.695381 | orchestrator | 2026-01-07 01:05:45.695386 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-07 01:05:45.695391 | orchestrator | Wednesday 07 January 2026 01:05:18 +0000 (0:00:00.062) 0:01:36.947 ***** 2026-01-07 01:05:45.695396 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:45.695401 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:45.695405 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:45.695410 | orchestrator | 2026-01-07 01:05:45.695415 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-07 01:05:45.695419 | orchestrator | Wednesday 07 January 2026 01:05:27 +0000 (0:00:08.716) 0:01:45.664 ***** 2026-01-07 01:05:45.695424 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:45.695429 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:45.695434 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:45.695438 | orchestrator | 2026-01-07 01:05:45.695443 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-07 01:05:45.695448 | orchestrator | Wednesday 07 January 2026 01:05:33 +0000 (0:00:06.166) 0:01:51.831 ***** 2026-01-07 01:05:45.695453 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:45.695457 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:45.695462 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:45.695467 | orchestrator | 2026-01-07 01:05:45.695472 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:05:45.695477 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:05:45.695483 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:05:45.695487 | orchestrator | 
testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:05:45.695493 | orchestrator | 2026-01-07 01:05:45.695498 | orchestrator | 2026-01-07 01:05:45.695503 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:05:45.695508 | orchestrator | Wednesday 07 January 2026 01:05:43 +0000 (0:00:10.398) 0:02:02.230 ***** 2026-01-07 01:05:45.695514 | orchestrator | =============================================================================== 2026-01-07 01:05:45.695519 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.89s 2026-01-07 01:05:45.695526 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.01s 2026-01-07 01:05:45.695536 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.94s 2026-01-07 01:05:45.695542 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.40s 2026-01-07 01:05:45.695547 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.72s 2026-01-07 01:05:45.695554 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.17s 2026-01-07 01:05:45.695559 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.13s 2026-01-07 01:05:45.695564 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.12s 2026-01-07 01:05:45.695569 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.50s 2026-01-07 01:05:45.695575 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.14s 2026-01-07 01:05:45.695583 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.07s 2026-01-07 01:05:45.695588 | orchestrator | service-ks-register : barbican | Granting user 
roles -------------------- 3.97s 2026-01-07 01:05:45.695593 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.94s 2026-01-07 01:05:45.695598 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.72s 2026-01-07 01:05:45.695603 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.24s 2026-01-07 01:05:45.695609 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.17s 2026-01-07 01:05:45.695614 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.00s 2026-01-07 01:05:45.695619 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.85s 2026-01-07 01:05:45.695624 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.73s 2026-01-07 01:05:45.695629 | orchestrator | barbican : Set barbican policy file ------------------------------------- 1.46s 2026-01-07 01:05:45.695635 | orchestrator | 2026-01-07 01:05:45 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:45.696627 | orchestrator | 2026-01-07 01:05:45 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:45.698176 | orchestrator | 2026-01-07 01:05:45 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:05:45.698193 | orchestrator | 2026-01-07 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:48.730299 | orchestrator | 2026-01-07 01:05:48 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:48.731542 | orchestrator | 2026-01-07 01:05:48 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:48.733744 | orchestrator | 2026-01-07 01:05:48 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:48.736000 | orchestrator | 
2026-01-07 01:05:48 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:05:48.736041 | orchestrator | 2026-01-07 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:51.776690 | orchestrator | 2026-01-07 01:05:51 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:51.777738 | orchestrator | 2026-01-07 01:05:51 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:51.778573 | orchestrator | 2026-01-07 01:05:51 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:51.779611 | orchestrator | 2026-01-07 01:05:51 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:05:51.780015 | orchestrator | 2026-01-07 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:54.814719 | orchestrator | 2026-01-07 01:05:54 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:54.818283 | orchestrator | 2026-01-07 01:05:54 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:54.820757 | orchestrator | 2026-01-07 01:05:54 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:54.822993 | orchestrator | 2026-01-07 01:05:54 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:05:54.823065 | orchestrator | 2026-01-07 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:57.912193 | orchestrator | 2026-01-07 01:05:57 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:05:57.912259 | orchestrator | 2026-01-07 01:05:57 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:05:57.912287 | orchestrator | 2026-01-07 01:05:57 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:05:57.912296 | orchestrator | 2026-01-07 01:05:57 | INFO  | 
Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:05:57.912304 | orchestrator | 2026-01-07 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:00.940714 | orchestrator | 2026-01-07 01:06:00 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:00.941276 | orchestrator | 2026-01-07 01:06:00 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:00.942276 | orchestrator | 2026-01-07 01:06:00 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:00.942985 | orchestrator | 2026-01-07 01:06:00 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:00.943008 | orchestrator | 2026-01-07 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:03.975114 | orchestrator | 2026-01-07 01:06:03 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:03.975194 | orchestrator | 2026-01-07 01:06:03 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:03.976705 | orchestrator | 2026-01-07 01:06:03 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:03.978089 | orchestrator | 2026-01-07 01:06:03 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:03.978148 | orchestrator | 2026-01-07 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:07.007873 | orchestrator | 2026-01-07 01:06:07 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:07.011910 | orchestrator | 2026-01-07 01:06:07 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:07.020374 | orchestrator | 2026-01-07 01:06:07 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:07.028333 | orchestrator | 2026-01-07 01:06:07 | INFO  | Task 
95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:07.028381 | orchestrator | 2026-01-07 01:06:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:10.049648 | orchestrator | 2026-01-07 01:06:10 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:10.049972 | orchestrator | 2026-01-07 01:06:10 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:10.050521 | orchestrator | 2026-01-07 01:06:10 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:10.051129 | orchestrator | 2026-01-07 01:06:10 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:10.051167 | orchestrator | 2026-01-07 01:06:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:13.107228 | orchestrator | 2026-01-07 01:06:13 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:13.107310 | orchestrator | 2026-01-07 01:06:13 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:13.107322 | orchestrator | 2026-01-07 01:06:13 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:13.107329 | orchestrator | 2026-01-07 01:06:13 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:13.107346 | orchestrator | 2026-01-07 01:06:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:16.172129 | orchestrator | 2026-01-07 01:06:16 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:16.173051 | orchestrator | 2026-01-07 01:06:16 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:16.173583 | orchestrator | 2026-01-07 01:06:16 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:16.174466 | orchestrator | 2026-01-07 01:06:16 | INFO  | Task 
95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:16.174641 | orchestrator | 2026-01-07 01:06:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:19.222329 | orchestrator | 2026-01-07 01:06:19 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:19.223549 | orchestrator | 2026-01-07 01:06:19 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:19.225372 | orchestrator | 2026-01-07 01:06:19 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:19.227040 | orchestrator | 2026-01-07 01:06:19 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:19.227095 | orchestrator | 2026-01-07 01:06:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:22.315803 | orchestrator | 2026-01-07 01:06:22 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:22.315926 | orchestrator | 2026-01-07 01:06:22 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:22.318062 | orchestrator | 2026-01-07 01:06:22 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:22.318900 | orchestrator | 2026-01-07 01:06:22 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:22.318919 | orchestrator | 2026-01-07 01:06:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:25.384002 | orchestrator | 2026-01-07 01:06:25 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:25.387698 | orchestrator | 2026-01-07 01:06:25 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:25.390468 | orchestrator | 2026-01-07 01:06:25 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:25.393636 | orchestrator | 2026-01-07 01:06:25 | INFO  | Task 
95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:25.393695 | orchestrator | 2026-01-07 01:06:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:28.433652 | orchestrator | 2026-01-07 01:06:28 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:28.434065 | orchestrator | 2026-01-07 01:06:28 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:28.435183 | orchestrator | 2026-01-07 01:06:28 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:28.435928 | orchestrator | 2026-01-07 01:06:28 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:28.435951 | orchestrator | 2026-01-07 01:06:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:31.484236 | orchestrator | 2026-01-07 01:06:31 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:31.484327 | orchestrator | 2026-01-07 01:06:31 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:31.484336 | orchestrator | 2026-01-07 01:06:31 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:31.484343 | orchestrator | 2026-01-07 01:06:31 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:31.484370 | orchestrator | 2026-01-07 01:06:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:34.511169 | orchestrator | 2026-01-07 01:06:34 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:34.512857 | orchestrator | 2026-01-07 01:06:34 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:34.514735 | orchestrator | 2026-01-07 01:06:34 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:34.514872 | orchestrator | 2026-01-07 01:06:34 | INFO  | Task 
95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:34.515321 | orchestrator | 2026-01-07 01:06:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:37.559091 | orchestrator | 2026-01-07 01:06:37 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:37.560568 | orchestrator | 2026-01-07 01:06:37 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:37.562858 | orchestrator | 2026-01-07 01:06:37 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:37.565634 | orchestrator | 2026-01-07 01:06:37 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:37.565695 | orchestrator | 2026-01-07 01:06:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:40.612935 | orchestrator | 2026-01-07 01:06:40 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:40.614684 | orchestrator | 2026-01-07 01:06:40 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:40.616591 | orchestrator | 2026-01-07 01:06:40 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:40.619268 | orchestrator | 2026-01-07 01:06:40 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:40.619321 | orchestrator | 2026-01-07 01:06:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:43.680036 | orchestrator | 2026-01-07 01:06:43 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:43.680859 | orchestrator | 2026-01-07 01:06:43 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:43.681938 | orchestrator | 2026-01-07 01:06:43 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:43.684763 | orchestrator | 2026-01-07 01:06:43 | INFO  | Task 
95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:43.684813 | orchestrator | 2026-01-07 01:06:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:46.726567 | orchestrator | 2026-01-07 01:06:46 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:46.727799 | orchestrator | 2026-01-07 01:06:46 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:46.729930 | orchestrator | 2026-01-07 01:06:46 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:46.730657 | orchestrator | 2026-01-07 01:06:46 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:46.730688 | orchestrator | 2026-01-07 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:49.792904 | orchestrator | 2026-01-07 01:06:49 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:49.795683 | orchestrator | 2026-01-07 01:06:49 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:49.798643 | orchestrator | 2026-01-07 01:06:49 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:49.800934 | orchestrator | 2026-01-07 01:06:49 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:49.800985 | orchestrator | 2026-01-07 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:52.838379 | orchestrator | 2026-01-07 01:06:52 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:52.840364 | orchestrator | 2026-01-07 01:06:52 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:52.842553 | orchestrator | 2026-01-07 01:06:52 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:52.843556 | orchestrator | 2026-01-07 01:06:52 | INFO  | Task 
95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:52.843594 | orchestrator | 2026-01-07 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:55.885767 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:55.885846 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:55.886406 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:55.887564 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:55.887602 | orchestrator | 2026-01-07 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:58.948961 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:06:58.950433 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:06:58.952303 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:06:58.955372 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:06:58.955425 | orchestrator | 2026-01-07 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:02.006855 | orchestrator | 2026-01-07 01:07:02 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:07:02.010373 | orchestrator | 2026-01-07 01:07:02 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:07:02.011392 | orchestrator | 2026-01-07 01:07:02 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:07:02.013563 | orchestrator | 2026-01-07 01:07:02 | INFO  | Task 
95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:07:02.013602 | orchestrator | 2026-01-07 01:07:02 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:05.071573 | orchestrator | 2026-01-07 01:07:05 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:07:05.072252 | orchestrator | 2026-01-07 01:07:05 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:07:05.073213 | orchestrator | 2026-01-07 01:07:05 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:07:05.074664 | orchestrator | 2026-01-07 01:07:05 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:07:05.074685 | orchestrator | 2026-01-07 01:07:05 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:08.122675 | orchestrator | 2026-01-07 01:07:08 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:07:08.125726 | orchestrator | 2026-01-07 01:07:08 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:07:08.128000 | orchestrator | 2026-01-07 01:07:08 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state STARTED 2026-01-07 01:07:08.128739 | orchestrator | 2026-01-07 01:07:08 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:07:08.128828 | orchestrator | 2026-01-07 01:07:08 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:11.186707 | orchestrator | 2026-01-07 01:07:11 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:07:11.188164 | orchestrator | 2026-01-07 01:07:11 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:07:11.190412 | orchestrator | 2026-01-07 01:07:11 | INFO  | Task ad844353-f3bf-4943-971d-36b749eae932 is in state SUCCESS 2026-01-07 01:07:11.192106 | orchestrator | 2026-01-07 01:07:11.192180 | orchestrator | 2026-01-07 
01:07:11.192191 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:07:11.192200 | orchestrator | 2026-01-07 01:07:11.192208 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:07:11.192215 | orchestrator | Wednesday 07 January 2026 01:03:48 +0000 (0:00:00.328) 0:00:00.328 ***** 2026-01-07 01:07:11.192222 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:07:11.192230 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:07:11.192237 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:07:11.192243 | orchestrator | 2026-01-07 01:07:11.192250 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:07:11.192256 | orchestrator | Wednesday 07 January 2026 01:03:48 +0000 (0:00:00.359) 0:00:00.687 ***** 2026-01-07 01:07:11.192264 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-07 01:07:11.192271 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-07 01:07:11.192278 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-07 01:07:11.192284 | orchestrator | 2026-01-07 01:07:11.192291 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-07 01:07:11.192297 | orchestrator | 2026-01-07 01:07:11.192304 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-07 01:07:11.192310 | orchestrator | Wednesday 07 January 2026 01:03:49 +0000 (0:00:00.395) 0:00:01.082 ***** 2026-01-07 01:07:11.192317 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:07:11.192325 | orchestrator | 2026-01-07 01:07:11.192331 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-07 01:07:11.192337 | orchestrator | Wednesday 07 
January 2026 01:03:49 +0000 (0:00:00.584) 0:00:01.667 ***** 2026-01-07 01:07:11.192344 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-07 01:07:11.192350 | orchestrator | 2026-01-07 01:07:11.192357 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-01-07 01:07:11.192364 | orchestrator | Wednesday 07 January 2026 01:03:52 +0000 (0:00:03.027) 0:00:04.695 ***** 2026-01-07 01:07:11.192370 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-07 01:07:11.192378 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-07 01:07:11.192384 | orchestrator | 2026-01-07 01:07:11.192391 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-07 01:07:11.192398 | orchestrator | Wednesday 07 January 2026 01:03:59 +0000 (0:00:07.148) 0:00:11.843 ***** 2026-01-07 01:07:11.192405 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:07:11.192412 | orchestrator | 2026-01-07 01:07:11.192418 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-07 01:07:11.192450 | orchestrator | Wednesday 07 January 2026 01:04:03 +0000 (0:00:03.710) 0:00:15.554 ***** 2026-01-07 01:07:11.192457 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:07:11.192465 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-07 01:07:11.192471 | orchestrator | 2026-01-07 01:07:11.192477 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-01-07 01:07:11.192484 | orchestrator | Wednesday 07 January 2026 01:04:07 +0000 (0:00:04.105) 0:00:19.660 ***** 2026-01-07 01:07:11.192490 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:07:11.192627 | orchestrator | 
2026-01-07 01:07:11.192636 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-01-07 01:07:11.192643 | orchestrator | Wednesday 07 January 2026 01:04:11 +0000 (0:00:03.834) 0:00:23.494 ***** 2026-01-07 01:07:11.192650 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-07 01:07:11.192656 | orchestrator | 2026-01-07 01:07:11.192663 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-07 01:07:11.192670 | orchestrator | Wednesday 07 January 2026 01:04:16 +0000 (0:00:04.510) 0:00:28.004 ***** 2026-01-07 01:07:11.192695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.193057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.193081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.193089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-07 01:07:11.193109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}}) 2026-01-07 01:07:11.193147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193706 | orchestrator | 2026-01-07 01:07:11.193720 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-07 01:07:11.193728 | orchestrator | Wednesday 07 January 2026 01:04:19 +0000 (0:00:03.317) 0:00:31.322 ***** 2026-01-07 01:07:11.193734 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:11.193742 | orchestrator | 2026-01-07 01:07:11.193748 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-07 01:07:11.193754 | orchestrator | Wednesday 07 January 2026 01:04:19 +0000 (0:00:00.109) 0:00:31.431 ***** 2026-01-07 01:07:11.193760 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:11.193766 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:11.193771 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:11.193777 | orchestrator | 2026-01-07 01:07:11.193782 | orchestrator | TASK [designate : include_tasks] *********************************************** 
2026-01-07 01:07:11.193788 | orchestrator | Wednesday 07 January 2026 01:04:19 +0000 (0:00:00.251) 0:00:31.683 ***** 2026-01-07 01:07:11.193796 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:07:11.193801 | orchestrator | 2026-01-07 01:07:11.193816 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-07 01:07:11.193823 | orchestrator | Wednesday 07 January 2026 01:04:20 +0000 (0:00:00.629) 0:00:32.313 ***** 2026-01-07 01:07:11.193830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.193864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.193882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.193890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193896 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193937 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.193999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194100 | orchestrator | 2026-01-07 01:07:11.194106 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-07 01:07:11.194113 | orchestrator | Wednesday 07 January 2026 01:04:27 +0000 (0:00:06.650) 0:00:38.963 ***** 2026-01-07 01:07:11.194124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.194131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:07:11.194169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-01-07 01:07:11.194184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194197 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:11.194208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.194215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:07:11.194243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194271 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:11.194281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.194288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:07:11.194317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194344 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:11.194351 | orchestrator | 2026-01-07 01:07:11.194357 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-07 01:07:11.194362 | orchestrator | Wednesday 07 January 2026 01:04:28 +0000 (0:00:01.588) 0:00:40.552 ***** 2026-01-07 01:07:11.194370 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.194377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:07:11.194446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194480 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:11.194489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.194500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:07:11.194648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194684 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:11.194691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.194703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-01-07 01:07:11.194718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.194770 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:11.194777 | orchestrator | 2026-01-07 01:07:11.194784 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-07 01:07:11.194791 | orchestrator | Wednesday 07 January 2026 01:04:31 +0000 (0:00:03.061) 0:00:43.613 ***** 2026-01-07 01:07:11.194798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.194815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.194844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.194852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.194976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-01-07 01:07:11.195009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195023 | orchestrator | 2026-01-07 01:07:11.195029 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-01-07 01:07:11.195036 | orchestrator | Wednesday 07 January 2026 01:04:38 +0000 (0:00:07.117) 0:00:50.731 ***** 2026-01-07 01:07:11.195043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.195058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.195065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.195076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195180 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195207 | orchestrator | 2026-01-07 01:07:11.195214 | orchestrator | TASK [designate : Copying over pools.yaml] 
************************************* 2026-01-07 01:07:11.195220 | orchestrator | Wednesday 07 January 2026 01:05:02 +0000 (0:00:24.032) 0:01:14.763 ***** 2026-01-07 01:07:11.195227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-07 01:07:11.195234 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-07 01:07:11.195241 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-07 01:07:11.195247 | orchestrator | 2026-01-07 01:07:11.195254 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-01-07 01:07:11.195266 | orchestrator | Wednesday 07 January 2026 01:05:07 +0000 (0:00:04.725) 0:01:19.489 ***** 2026-01-07 01:07:11.195272 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-07 01:07:11.195279 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-07 01:07:11.195287 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-07 01:07:11.195295 | orchestrator | 2026-01-07 01:07:11.195303 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-07 01:07:11.195311 | orchestrator | Wednesday 07 January 2026 01:05:10 +0000 (0:00:03.038) 0:01:22.527 ***** 2026-01-07 01:07:11.195318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.195330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.195344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.195353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-07 01:07:11.195727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195758 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:07:11.195774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:07:11.195790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:07:11.195797 | orchestrator |
2026-01-07 01:07:11.195804 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-01-07 01:07:11.195810 | orchestrator |
Wednesday 07 January 2026 01:05:14 +0000 (0:00:03.999) 0:01:26.526 ***** 2026-01-07 01:07:11.195818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.195835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.195843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.195856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195935 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.195967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.195990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:07:11.195997 | orchestrator |
2026-01-07 01:07:11.196009 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-07 01:07:11.196016 | orchestrator | Wednesday 07 January 2026 01:05:17 +0000 (0:00:02.969) 0:01:29.495 *****
2026-01-07 01:07:11.196022 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:07:11.196030 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:07:11.196037 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:07:11.196044 | orchestrator |
2026-01-07 01:07:11.196050 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-01-07 01:07:11.196057 | orchestrator | Wednesday 07 January 2026 01:05:18 +0000 (0:00:00.760) 0:01:30.255 *****
2026-01-07 01:07:11.196064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.196072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:07:11.196085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.196093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.196114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.196122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.196129 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:11.196135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.196141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:07:11.196151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.196157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.196173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.196179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.196184 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:11.196191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:07:11.196199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:07:11.196205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:07:11.196216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:07:11.196227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:07:11.196239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:07:11.196246 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:07:11.196254 | orchestrator |
2026-01-07 01:07:11.196261 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-01-07 01:07:11.196268 | orchestrator | Wednesday 07 January 2026 01:05:20 +0000
(0:00:01.968) 0:01:32.223 ***** 2026-01-07 01:07:11.196275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.196282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.196293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:07:11.196304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:07:11.196457 | orchestrator | 2026-01-07 01:07:11.196464 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-07 01:07:11.196471 | orchestrator | Wednesday 07 January 2026 01:05:26 +0000 (0:00:06.016) 0:01:38.240 ***** 2026-01-07 01:07:11.196478 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:11.196485 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:11.196492 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:11.196499 | orchestrator | 2026-01-07 01:07:11.196506 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-01-07 01:07:11.196513 | orchestrator | Wednesday 07 January 2026 01:05:26 +0000 (0:00:00.555) 0:01:38.796 ***** 2026-01-07 01:07:11.196520 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-07 01:07:11.196527 | orchestrator | 2026-01-07 01:07:11.196534 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-01-07 01:07:11.196560 | orchestrator | Wednesday 07 January 2026 01:05:29 +0000 (0:00:02.370) 0:01:41.166 ***** 2026-01-07 01:07:11.196568 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 01:07:11.196575 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-01-07 01:07:11.196582 | orchestrator | 2026-01-07 01:07:11.196589 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-01-07 01:07:11.196595 | orchestrator | Wednesday 07 January 2026 01:05:32 +0000 (0:00:02.923) 0:01:44.089 ***** 
2026-01-07 01:07:11.196602 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:11.196609 | orchestrator | 2026-01-07 01:07:11.196616 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-07 01:07:11.196623 | orchestrator | Wednesday 07 January 2026 01:05:45 +0000 (0:00:13.734) 0:01:57.824 ***** 2026-01-07 01:07:11.196629 | orchestrator | 2026-01-07 01:07:11.196636 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-07 01:07:11.196643 | orchestrator | Wednesday 07 January 2026 01:05:46 +0000 (0:00:00.423) 0:01:58.247 ***** 2026-01-07 01:07:11.196650 | orchestrator | 2026-01-07 01:07:11.196657 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-07 01:07:11.196664 | orchestrator | Wednesday 07 January 2026 01:05:46 +0000 (0:00:00.128) 0:01:58.375 ***** 2026-01-07 01:07:11.196671 | orchestrator | 2026-01-07 01:07:11.196678 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-01-07 01:07:11.196691 | orchestrator | Wednesday 07 January 2026 01:05:46 +0000 (0:00:00.165) 0:01:58.541 ***** 2026-01-07 01:07:11.196697 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:11.196705 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:07:11.196712 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:07:11.196719 | orchestrator | 2026-01-07 01:07:11.196726 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-01-07 01:07:11.196732 | orchestrator | Wednesday 07 January 2026 01:06:02 +0000 (0:00:15.943) 0:02:14.484 ***** 2026-01-07 01:07:11.196739 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:11.196746 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:07:11.196753 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:07:11.196760 | orchestrator | 2026-01-07 01:07:11.196767 | 
orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-01-07 01:07:11.196774 | orchestrator | Wednesday 07 January 2026 01:06:20 +0000 (0:00:17.740) 0:02:32.225 ***** 2026-01-07 01:07:11.196780 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:11.196787 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:07:11.196794 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:07:11.196801 | orchestrator | 2026-01-07 01:07:11.196808 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-01-07 01:07:11.196814 | orchestrator | Wednesday 07 January 2026 01:06:26 +0000 (0:00:06.334) 0:02:38.559 ***** 2026-01-07 01:07:11.196821 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:11.196828 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:07:11.196834 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:07:11.196840 | orchestrator | 2026-01-07 01:07:11.196846 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-01-07 01:07:11.196854 | orchestrator | Wednesday 07 January 2026 01:06:40 +0000 (0:00:14.142) 0:02:52.702 ***** 2026-01-07 01:07:11.196861 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:11.196867 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:07:11.196879 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:07:11.196885 | orchestrator | 2026-01-07 01:07:11.196892 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-01-07 01:07:11.196899 | orchestrator | Wednesday 07 January 2026 01:06:50 +0000 (0:00:10.223) 0:03:02.926 ***** 2026-01-07 01:07:11.196906 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:11.196913 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:07:11.196920 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:07:11.196926 | orchestrator | 2026-01-07 01:07:11.196933 | orchestrator | 
TASK [designate : Non-destructive DNS pools update] **************************** 2026-01-07 01:07:11.196940 | orchestrator | Wednesday 07 January 2026 01:07:01 +0000 (0:00:10.694) 0:03:13.620 ***** 2026-01-07 01:07:11.196947 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:11.196953 | orchestrator | 2026-01-07 01:07:11.196960 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:07:11.196966 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:07:11.196974 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:07:11.196980 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:07:11.196986 | orchestrator | 2026-01-07 01:07:11.196991 | orchestrator | 2026-01-07 01:07:11.197003 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:07:11.197009 | orchestrator | Wednesday 07 January 2026 01:07:08 +0000 (0:00:07.142) 0:03:20.763 ***** 2026-01-07 01:07:11.197017 | orchestrator | =============================================================================== 2026-01-07 01:07:11.197023 | orchestrator | designate : Copying over designate.conf -------------------------------- 24.03s 2026-01-07 01:07:11.197036 | orchestrator | designate : Restart designate-api container ---------------------------- 17.74s 2026-01-07 01:07:11.197043 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.94s 2026-01-07 01:07:11.197050 | orchestrator | designate : Restart designate-producer container ----------------------- 14.14s 2026-01-07 01:07:11.197057 | orchestrator | designate : Running Designate bootstrap container ---------------------- 13.73s 2026-01-07 01:07:11.197064 | orchestrator | designate : Restart designate-worker 
container ------------------------- 10.69s 2026-01-07 01:07:11.197070 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.22s 2026-01-07 01:07:11.197078 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.15s 2026-01-07 01:07:11.197085 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.14s 2026-01-07 01:07:11.197091 | orchestrator | designate : Copying over config.json files for services ----------------- 7.12s 2026-01-07 01:07:11.197096 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.65s 2026-01-07 01:07:11.197101 | orchestrator | designate : Restart designate-central container ------------------------- 6.33s 2026-01-07 01:07:11.197107 | orchestrator | designate : Check designate containers ---------------------------------- 6.02s 2026-01-07 01:07:11.197113 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.73s 2026-01-07 01:07:11.197118 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.51s 2026-01-07 01:07:11.197124 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.11s 2026-01-07 01:07:11.197129 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.00s 2026-01-07 01:07:11.197134 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.83s 2026-01-07 01:07:11.197140 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.71s 2026-01-07 01:07:11.197145 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.32s 2026-01-07 01:07:11.197151 | orchestrator | 2026-01-07 01:07:11 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:07:11.197159 | orchestrator | 2026-01-07 01:07:11 | INFO  | Task 
52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED 2026-01-07 01:07:11.197166 | orchestrator | 2026-01-07 01:07:11 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:14.256227 | orchestrator | 2026-01-07 01:07:14 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state STARTED 2026-01-07 01:07:14.257452 | orchestrator | 2026-01-07 01:07:14 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED 2026-01-07 01:07:14.258626 | orchestrator | 2026-01-07 01:07:14 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:07:14.259816 | orchestrator | 2026-01-07 01:07:14 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED 2026-01-07 01:07:14.259892 | orchestrator | 2026-01-07 01:07:14 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:17.320553 | orchestrator | 2026-01-07 01:07:17 | INFO  | Task ecf05ac8-103f-465a-81ee-c4f4095dae1b is in state SUCCESS 2026-01-07 01:07:17.322445 | orchestrator | 2026-01-07 01:07:17.322495 | orchestrator | 2026-01-07 01:07:17.322502 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:07:17.322509 | orchestrator | 2026-01-07 01:07:17.322514 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:07:17.322530 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:00:00.347) 0:00:00.347 ***** 2026-01-07 01:07:17.322536 | orchestrator | ok: [testbed-manager] 2026-01-07 01:07:17.322542 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:07:17.322546 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:07:17.322550 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:07:17.322553 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:07:17.322572 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:07:17.322577 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:07:17.322613 | orchestrator | 2026-01-07 01:07:17.322619 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:07:17.322624 | orchestrator | Wednesday 07 January 2026 01:03:42 +0000 (0:00:00.938) 0:00:01.285 ***** 2026-01-07 01:07:17.322629 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-07 01:07:17.322635 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-07 01:07:17.322639 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-07 01:07:17.322642 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-07 01:07:17.322647 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-07 01:07:17.322652 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-07 01:07:17.322658 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-07 01:07:17.322663 | orchestrator | 2026-01-07 01:07:17.322668 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-07 01:07:17.322673 | orchestrator | 2026-01-07 01:07:17.322679 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-07 01:07:17.322685 | orchestrator | Wednesday 07 January 2026 01:03:42 +0000 (0:00:00.829) 0:00:02.114 ***** 2026-01-07 01:07:17.322688 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:07:17.322693 | orchestrator | 2026-01-07 01:07:17.322696 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-07 01:07:17.322699 | orchestrator | Wednesday 07 January 2026 01:03:44 +0000 (0:00:01.364) 0:00:03.478 ***** 2026-01-07 01:07:17.322703 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 01:07:17.322761 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.322771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.322777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.322802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.322806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.322812 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.322818 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.322825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.322831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.322836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.322866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.322880 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323060 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323070 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-07 01:07:17.323081 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 01:07:17.323096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323100 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323112 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-07 01:07:17.323149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323165 | orchestrator | 2026-01-07 01:07:17.323169 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-07 01:07:17.323173 | orchestrator | Wednesday 07 January 2026 01:03:47 +0000 (0:00:03.303) 0:00:06.782 ***** 2026-01-07 01:07:17.323176 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:07:17.323182 | orchestrator | 2026-01-07 01:07:17.323189 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-07 01:07:17.323197 | orchestrator | Wednesday 07 January 2026 01:03:48 +0000 (0:00:01.438) 0:00:08.220 ***** 2026-01-07 01:07:17.323202 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 01:07:17.323208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.323217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.323222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.323233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.323238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.323243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.323329 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 
01:07:17.323340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323386 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323391 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323419 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2026-01-07 01:07:17.323427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323440 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 01:07:17.323448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.323856 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.323877 | orchestrator | 2026-01-07 01:07:17.323881 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-07 01:07:17.323884 | orchestrator | Wednesday 07 January 2026 01:03:53 +0000 (0:00:04.849) 0:00:13.069 ***** 2026-01-07 01:07:17.323888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.323892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323897 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 01:07:17.323900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323904 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.323909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.323914 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.323918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323922 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-07 01:07:17.323928 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323931 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:17.323934 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:07:17.323938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.323941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.323947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323963 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.323967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.323973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.323976 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:17.323980 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:17.324167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.324178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324188 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:07:17.324191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.324195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324201 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:07:17.324204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.324208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324225 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:07:17.324228 | orchestrator | 2026-01-07 01:07:17.324232 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-07 01:07:17.324235 | orchestrator | Wednesday 07 January 2026 01:03:55 +0000 (0:00:01.544) 0:00:14.613 ***** 2026-01-07 01:07:17.324244 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 01:07:17.324254 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.324258 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324263 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-07 01:07:17.324269 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324274 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:07:17.324293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.324303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324329 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:17.324335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.324341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324364 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324384 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:17.324389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.324395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.324528 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:07:17.324539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324571 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:07:17.324614 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:17.324619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:07:17.324624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324635 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:07:17.324640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 
01:07:17.324646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:07:17.324677 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:07:17.324682 | orchestrator | 2026-01-07 01:07:17.324687 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-07 01:07:17.324693 | orchestrator | Wednesday 07 January 2026 01:03:57 +0000 (0:00:01.879) 0:00:16.493 ***** 2026-01-07 01:07:17.324698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 01:07:17.324704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.324710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.324713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.324717 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.324814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.324858 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.324867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:07:17.324872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.324877 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.324883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.324888 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.324893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.324898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.324924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.324929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.324936 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 01:07:17.324942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.324948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.324954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.324962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.324981 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.324990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.324994 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.324997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.325000 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:07:17.325004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.325009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.325018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:07:17.325024 | orchestrator | 2026-01-07 01:07:17.325030 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-07 01:07:17.325036 | orchestrator | Wednesday 07 January 2026 01:04:03 +0000 (0:00:06.593) 0:00:23.086 ***** 2026-01-07 01:07:17.325041 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:07:17.325046 | orchestrator | 2026-01-07 01:07:17.325052 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-07 01:07:17.325066 | orchestrator | Wednesday 07 January 2026 01:04:05 +0000 (0:00:01.304) 0:00:24.390 ***** 2026-01-07 01:07:17.325072 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097545, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6626408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325077 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097545, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6626408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325080 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097545, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6626408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325083 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097545, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6626408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325087 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097578, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6725194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325093 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097578, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6725194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325105 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097578, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6725194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325111 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097545, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6626408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325114 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097545, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6626408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325118 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097545, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6626408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325121 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097535, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6585193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325126 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097535, 'dev': 
117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6585193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325132 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097578, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6725194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325154 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097570, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325163 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097535, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6585193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325168 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097578, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6725194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325173 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097578, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6725194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325178 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097531, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.656862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 
01:07:17.325187 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097570, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325191 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097554, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325208 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097535, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6585193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325216 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097570, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325220 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097535, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6585193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325223 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097567, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325226 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097535, 'dev': 117, 'nlink': 1, 
'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6585193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325232 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097531, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.656862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325235 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097578, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6725194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325239 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097531, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.656862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325253 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097570, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325259 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097556, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325264 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097554, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325269 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097570, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325275 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097570, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325278 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097554, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325281 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097531, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.656862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325297 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097531, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.656862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325301 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097540, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6609054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325304 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097531, 'dev': 117, 'nlink': 1, 'atime': 
1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.656862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325309 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097554, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325313 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097554, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325316 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097567, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325319 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097567, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325332 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097535, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6585193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325336 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097554, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325339 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097556, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325344 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097567, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325347 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097567, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325351 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097577, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6718338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325354 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097567, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325367 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097556, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325371 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097540, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 
1767744158.0, 'ctime': 1767745137.6609054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325375 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097540, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6609054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325380 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097556, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325383 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097526, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6560168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325386 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097556, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325390 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097577, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6718338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325404 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097587, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6757498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325410 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097556, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325421 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097570, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325428 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097577, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6718338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325433 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097540, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6609054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325438 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097540, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6609054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325443 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097540, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6609054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325465 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097526, 'dev': 117, 'nlink': 1, 'atime': 
1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6560168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325472 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097574, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6715965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325480 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097577, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6718338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325484 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097526, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6560168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325487 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097577, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6718338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325490 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097577, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6718338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325494 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097587, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6757498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325510 
| orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097526, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6560168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325514 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097533, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6579547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325521 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097574, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6715965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325525 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097526, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6560168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325529 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097526, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6560168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325533 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097587, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6757498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325536 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097587, 'dev': 117, 
'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6757498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325545 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097533, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6579547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325549 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097587, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6757498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325555 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097574, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6715965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325559 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097530, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6563697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325563 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097530, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6563697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325567 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097587, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6757498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 
01:07:17.325570 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097574, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6715965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325591 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097533, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6579547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325602 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097574, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6715965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325607 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097564, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6685195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325612 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097530, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6563697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325618 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097564, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6685195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325623 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 
1097574, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6715965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325628 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097564, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6685195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325639 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097531, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.656862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325646 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097533, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6579547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325650 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097560, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6678672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325654 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097560, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6678672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325659 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097533, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6579547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 
01:07:17.325665 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097530, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6563697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325670 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097533, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6579547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325682 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097584, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6747568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325694 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:17.325700 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097560, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6678672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325705 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097584, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6747568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325711 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:17.325714 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097564, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6685195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325718 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097530, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6563697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325722 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097530, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6563697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325726 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097560, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6678672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325736 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097584, 'dev': 117, 'nlink': 1, 
'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6747568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325741 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:07:17.325744 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097564, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6685195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325749 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097564, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6685195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325755 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097584, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6747568, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325760 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:17.325766 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097560, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6678672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325771 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097554, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325819 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097560, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6678672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325834 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097584, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6747568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325840 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:07:17.325845 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097584, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6747568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:07:17.325851 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:07:17.325856 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097567, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6700304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325862 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097556, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6659632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325868 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097540, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6609054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325873 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097577, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6718338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325882 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097526, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6560168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325891 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097587, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6757498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325897 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097574, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6715965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325922 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097533, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6579547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325926 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097530, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6563697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325930 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097564, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6685195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325933 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097560, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 
1767745137.6678672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325938 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097584, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6747568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:07:17.325942 | orchestrator | 2026-01-07 01:07:17.325945 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-07 01:07:17.325948 | orchestrator | Wednesday 07 January 2026 01:04:35 +0000 (0:00:29.983) 0:00:54.374 ***** 2026-01-07 01:07:17.325953 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:07:17.325958 | orchestrator | 2026-01-07 01:07:17.325966 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-07 01:07:17.325972 | orchestrator | Wednesday 07 January 2026 01:04:36 +0000 (0:00:01.620) 0:00:55.994 ***** 2026-01-07 01:07:17.325979 | orchestrator | [WARNING]: Skipped 2026-01-07 01:07:17.325984 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.325990 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-01-07 01:07:17.325994 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326000 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-01-07 01:07:17.326005 
| orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-07 01:07:17.326010 | orchestrator | [WARNING]: Skipped 2026-01-07 01:07:17.326068 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326075 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-07 01:07:17.326080 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326086 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-07 01:07:17.326091 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:07:17.326096 | orchestrator | [WARNING]: Skipped 2026-01-07 01:07:17.326102 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326107 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-07 01:07:17.326112 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326117 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-07 01:07:17.326122 | orchestrator | [WARNING]: Skipped 2026-01-07 01:07:17.326128 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326133 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-07 01:07:17.326138 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326143 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-07 01:07:17.326148 | orchestrator | [WARNING]: Skipped 2026-01-07 01:07:17.326153 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326158 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-07 01:07:17.326164 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 
2026-01-07 01:07:17.326168 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-07 01:07:17.326171 | orchestrator | [WARNING]: Skipped 2026-01-07 01:07:17.326174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326182 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-07 01:07:17.326185 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326189 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-07 01:07:17.326192 | orchestrator | [WARNING]: Skipped 2026-01-07 01:07:17.326195 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326198 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-07 01:07:17.326201 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:07:17.326204 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-07 01:07:17.326208 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:07:17.326211 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-07 01:07:17.326214 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-07 01:07:17.326217 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-07 01:07:17.326220 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-07 01:07:17.326223 | orchestrator | 2026-01-07 01:07:17.326226 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-07 01:07:17.326229 | orchestrator | Wednesday 07 January 2026 01:04:40 +0000 (0:00:03.612) 0:00:59.606 ***** 2026-01-07 01:07:17.326233 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:07:17.326236 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:17.326239 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:07:17.326242 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:17.326245 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:07:17.326249 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:17.326252 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:07:17.326255 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:07:17.326258 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:07:17.326261 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:07:17.326264 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:07:17.326267 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:07:17.326270 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-07 01:07:17.326274 | orchestrator | 2026-01-07 01:07:17.326277 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-07 01:07:17.326280 | orchestrator | Wednesday 07 January 2026 01:05:06 +0000 (0:00:25.712) 0:01:25.319 ***** 2026-01-07 01:07:17.326287 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:07:17.326290 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:07:17.326295 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:17.326303 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:07:17.326308 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:17.326313 | 
orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:17.326318 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:07:17.326323 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:07:17.326327 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:07:17.326332 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:07:17.326338 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:07:17.326349 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:07:17.326354 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-07 01:07:17.326359 | orchestrator | 2026-01-07 01:07:17.326364 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-07 01:07:17.326369 | orchestrator | Wednesday 07 January 2026 01:05:10 +0000 (0:00:04.646) 0:01:29.965 ***** 2026-01-07 01:07:17.326374 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:07:17.326380 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:07:17.326385 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:07:17.326390 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:07:17.326396 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:17.326401 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:17.326406 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 01:07:17.326411 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:17.326416 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-07 01:07:17.326421 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:07:17.326426 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:07:17.326431 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:07:17.326437 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:07:17.326442 | orchestrator | 2026-01-07 01:07:17.326447 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-07 01:07:17.326453 | orchestrator | Wednesday 07 January 2026 01:05:14 +0000 (0:00:03.676) 0:01:33.641 ***** 2026-01-07 01:07:17.326458 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:07:17.326463 | orchestrator | 2026-01-07 01:07:17.326468 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-07 01:07:17.326473 | orchestrator | Wednesday 07 January 2026 01:05:15 +0000 (0:00:01.016) 0:01:34.658 ***** 2026-01-07 01:07:17.326478 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:07:17.326483 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:17.326489 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:17.326495 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:17.326500 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:07:17.326505 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:07:17.326510 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:07:17.326515 | orchestrator | 2026-01-07 01:07:17.326521 | orchestrator | TASK 
[prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-07 01:07:17.326526 | orchestrator | Wednesday 07 January 2026 01:05:15 +0000 (0:00:00.503) 0:01:35.161 ***** 2026-01-07 01:07:17.326531 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:07:17.326537 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:07:17.326542 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:07:17.326547 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:07:17.326552 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:17.326557 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:07:17.326562 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:07:17.326568 | orchestrator | 2026-01-07 01:07:17.326573 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-07 01:07:17.326578 | orchestrator | Wednesday 07 January 2026 01:05:18 +0000 (0:00:02.815) 0:01:37.977 ***** 2026-01-07 01:07:17.326606 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:07:17.326611 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:17.326617 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:07:17.326622 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:07:17.326627 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:07:17.326633 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:17.326638 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:07:17.326643 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:17.326652 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:07:17.326657 | orchestrator | skipping: [testbed-node-3] 
2026-01-07 01:07:17.326665 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2) 
2026-01-07 01:07:17.326670 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:07:17.326676 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2) 
2026-01-07 01:07:17.326681 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:07:17.326686 | orchestrator | 
2026-01-07 01:07:17.326692 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-01-07 01:07:17.326697 | orchestrator | Wednesday 07 January 2026 01:05:23 +0000 (0:00:04.305) 0:01:42.283 *****
2026-01-07 01:07:17.326701 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 
2026-01-07 01:07:17.326705 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:07:17.326709 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 
2026-01-07 01:07:17.326712 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:07:17.326716 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 
2026-01-07 01:07:17.326719 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:07:17.326722 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 
2026-01-07 01:07:17.326725 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:07:17.326728 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 
2026-01-07 01:07:17.326731 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:07:17.326734 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-07 01:07:17.326737 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 
2026-01-07 01:07:17.326741 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:07:17.326744 | orchestrator | 
2026-01-07 01:07:17.326747 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-01-07 01:07:17.326750 | orchestrator | Wednesday 07 January 2026 01:05:24 +0000 (0:00:01.675) 0:01:43.958 *****
2026-01-07 01:07:17.326753 | orchestrator | [WARNING]: Skipped
2026-01-07 01:07:17.326756 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-01-07 01:07:17.326759 | orchestrator | due to this access issue:
2026-01-07 01:07:17.326762 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-01-07 01:07:17.326765 | orchestrator | not a directory
2026-01-07 01:07:17.326769 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 01:07:17.326772 | orchestrator | 
2026-01-07 01:07:17.326775 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-01-07 01:07:17.326778 | orchestrator | Wednesday 07 January 2026 01:05:25 +0000 (0:00:01.083) 0:01:45.041 *****
2026-01-07 01:07:17.326785 | orchestrator | skipping: [testbed-manager]
2026-01-07 01:07:17.326790 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:07:17.326795 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:07:17.326800 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:07:17.326806 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:07:17.326811 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:07:17.326815 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:07:17.326820 | orchestrator | 
2026-01-07 01:07:17.326825 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-01-07 01:07:17.326830 | orchestrator | Wednesday 07 January 2026 01:05:27 +0000 (0:00:01.569) 0:01:46.611 *****
2026-01-07 01:07:17.326835 | orchestrator | skipping: [testbed-manager]
2026-01-07 01:07:17.326841 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:07:17.326846 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:07:17.326851 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:07:17.326856 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:07:17.326861 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:07:17.326865 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:07:17.326870 | orchestrator | 
2026-01-07 01:07:17.326875 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-01-07 01:07:17.326881 | orchestrator | Wednesday 07 January 2026 01:05:28 +0000 (0:00:01.005) 0:01:47.617 *****
2026-01-07 01:07:17.326887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:07:17.326902 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-07 01:07:17.326909 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:07:17.326915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.326921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:07:17.326930 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.326936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:07:17.326942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:07:17.326948 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.326959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.326965 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:07:17.326971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:07:17.326982 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-07 01:07:17.326990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.326995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.327001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.327012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.327018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.327024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.327034 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.327039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.327044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.327050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.327055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.327065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.327071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.327076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:07:17.327084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.327090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:07:17.327094 | orchestrator | 
2026-01-07 01:07:17.327100 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-07 01:07:17.327105 | orchestrator | Wednesday 07 January 2026 01:05:34 +0000 (0:00:05.784) 0:01:53.401 *****
2026-01-07 01:07:17.327111 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0) 
2026-01-07 01:07:17.327117 | orchestrator | skipping: [testbed-manager]
2026-01-07 01:07:17.327123 | orchestrator | 
2026-01-07 01:07:17.327128 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:07:17.327133 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:01.911) 0:01:55.313 *****
2026-01-07 01:07:17.327139 | orchestrator | 
2026-01-07 01:07:17.327144 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:07:17.327150 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:00.057) 0:01:55.370 *****
2026-01-07 01:07:17.327155 | orchestrator | 
2026-01-07 01:07:17.327160 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:07:17.327164 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:00.055) 0:01:55.425 *****
2026-01-07 01:07:17.327169 | orchestrator | 
2026-01-07 01:07:17.327174 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:07:17.327179 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:00.060) 0:01:55.485 *****
2026-01-07 01:07:17.327184 | orchestrator | 
2026-01-07 01:07:17.327189 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:07:17.327193 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:00.159) 0:01:55.645 *****
2026-01-07 01:07:17.327198 | orchestrator | 
2026-01-07 01:07:17.327203 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:07:17.327208 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:00.053) 0:01:55.698 *****
2026-01-07 01:07:17.327213 | orchestrator | 
2026-01-07 01:07:17.327218 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:07:17.327224 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:00.112) 0:01:55.811 *****
2026-01-07 01:07:17.327227 | orchestrator | 
2026-01-07 01:07:17.327230 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-07 01:07:17.327234 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:00.150) 0:01:55.962 *****
2026-01-07 01:07:17.327237 | orchestrator | changed: [testbed-manager]
2026-01-07 01:07:17.327240 | orchestrator | 
2026-01-07 01:07:17.327243 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-07 01:07:17.327251 | orchestrator | Wednesday 07 January 2026 01:06:02 +0000 (0:00:25.626) 0:02:21.588 *****
2026-01-07 01:07:17.327254 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:07:17.327258 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:07:17.327261 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:07:17.327266 | orchestrator | changed: [testbed-manager]
2026-01-07 01:07:17.327269 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:07:17.327272 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:07:17.327275 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:07:17.327278 | orchestrator | 
2026-01-07 01:07:17.327282 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-07 01:07:17.327287 | orchestrator | Wednesday 07 January 2026 01:06:15 +0000 (0:00:13.110) 0:02:34.699 *****
2026-01-07 01:07:17.327292 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:07:17.327299 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:07:17.327306 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:07:17.327311 | orchestrator | 
2026-01-07 01:07:17.327315 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-07 01:07:17.327321 | orchestrator | Wednesday 07 January 2026 01:06:20 +0000 (0:00:04.878) 0:02:39.577 *****
2026-01-07 01:07:17.327326 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:07:17.327331 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:07:17.327335 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:07:17.327340 | orchestrator | 
2026-01-07 01:07:17.327345 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-07 01:07:17.327349 | orchestrator | Wednesday 07 January 2026 01:06:26 +0000 (0:00:06.316) 0:02:45.894 *****
2026-01-07 01:07:17.327354 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:07:17.327359 | orchestrator | changed: [testbed-manager]
2026-01-07 01:07:17.327363 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:07:17.327368 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:07:17.327373 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:07:17.327377 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:07:17.327382 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:07:17.327387 | orchestrator | 
2026-01-07 01:07:17.327392 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-01-07 01:07:17.327397 | orchestrator | Wednesday 07 January 2026 01:06:41 +0000 (0:00:15.223) 0:03:01.117 *****
2026-01-07 01:07:17.327402 | orchestrator | changed: [testbed-manager]
2026-01-07 01:07:17.327406 | orchestrator | 
2026-01-07 01:07:17.327412 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-01-07 01:07:17.327417 | orchestrator | Wednesday 07 January 2026 01:06:55 +0000 (0:00:13.770) 0:03:14.888 *****
2026-01-07 01:07:17.327421 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:07:17.327424 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:07:17.327427 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:07:17.327430 | orchestrator | 
2026-01-07 01:07:17.327434 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-01-07 01:07:17.327437 | orchestrator | Wednesday 07 January 2026 01:07:00 +0000 (0:00:04.437) 0:03:19.326 *****
2026-01-07 01:07:17.327440 | orchestrator | changed: [testbed-manager]
2026-01-07 01:07:17.327443 | orchestrator | 
2026-01-07 01:07:17.327446 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-01-07 01:07:17.327449 | orchestrator | Wednesday 07 January 2026 01:07:10 +0000 (0:00:09.980) 0:03:29.306 *****
2026-01-07 01:07:17.327452 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:07:17.327455 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:07:17.327458 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:07:17.327461 | orchestrator | 
2026-01-07 01:07:17.327464 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:07:17.327468 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-07 01:07:17.327475 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-07 01:07:17.327478 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-07 01:07:17.327481 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-07 01:07:17.327484 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-07 01:07:17.327488 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-07 01:07:17.327491 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-07 01:07:17.327494 | orchestrator | 
2026-01-07 01:07:17.327497 | orchestrator | 
2026-01-07 01:07:17.327500 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:07:17.327503 | orchestrator | Wednesday 07 January 2026 01:07:15 +0000 (0:00:05.411) 0:03:34.718 *****
2026-01-07 01:07:17.327506 | orchestrator | ===============================================================================
2026-01-07 01:07:17.327509 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 29.98s
2026-01-07 01:07:17.327512 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 25.71s
2026-01-07 01:07:17.327516 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 25.63s
2026-01-07 01:07:17.327519 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.22s
2026-01-07 01:07:17.327524 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.77s
2026-01-07 01:07:17.327527 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.11s
2026-01-07 01:07:17.327531 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.98s
2026-01-07 01:07:17.327538 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.59s
2026-01-07 01:07:17.327541 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.32s
2026-01-07 01:07:17.327544 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.78s
2026-01-07 01:07:17.327547 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.41s
2026-01-07 01:07:17.327550 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 4.88s
2026-01-07 01:07:17.327553 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 4.85s
2026-01-07 01:07:17.327556 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.65s
2026-01-07 01:07:17.327559 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.44s
2026-01-07 01:07:17.327562 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 4.31s
2026-01-07 01:07:17.327566 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.68s
2026-01-07 01:07:17.327569 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.61s
2026-01-07 01:07:17.327572 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.30s
2026-01-07 01:07:17.327575 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.82s
2026-01-07 01:07:17.327578 | orchestrator | 2026-01-07 01:07:17 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:17.327595 | orchestrator | 2026-01-07 01:07:17 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:17.327817 | orchestrator | 2026-01-07 01:07:17 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:17.329258 | orchestrator | 2026-01-07 01:07:17 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:17.329390 | orchestrator | 2026-01-07 01:07:17 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:20.389077 | orchestrator | 2026-01-07 01:07:20 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:20.394504 | orchestrator | 2026-01-07 01:07:20 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:20.398822 | orchestrator | 2026-01-07 01:07:20 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:20.402635 | orchestrator | 2026-01-07 01:07:20 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:20.402686 | orchestrator | 2026-01-07 01:07:20 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:23.443035 | orchestrator | 2026-01-07 01:07:23 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:23.445288 | orchestrator | 2026-01-07 01:07:23 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:23.447237 | orchestrator | 2026-01-07 01:07:23 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:23.449201 | orchestrator | 2026-01-07 01:07:23 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:23.449289 | orchestrator | 2026-01-07 01:07:23 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:26.490810 | orchestrator | 2026-01-07 01:07:26 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:26.491517 | orchestrator | 2026-01-07 01:07:26 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:26.492789 | orchestrator | 2026-01-07 01:07:26 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:26.493661 | orchestrator | 2026-01-07 01:07:26 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:26.493884 | orchestrator | 2026-01-07 01:07:26 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:29.545720 | orchestrator | 2026-01-07 01:07:29 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:29.547963 | orchestrator | 2026-01-07 01:07:29 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:29.550447 | orchestrator | 2026-01-07 01:07:29 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:29.554337 | orchestrator | 2026-01-07 01:07:29 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:29.554806 | orchestrator | 2026-01-07 01:07:29 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:32.606296 | orchestrator | 2026-01-07 01:07:32 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:32.608143 | orchestrator | 2026-01-07 01:07:32 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:32.609998 | orchestrator | 2026-01-07 01:07:32 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:32.611627 | orchestrator | 2026-01-07 01:07:32 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:32.611946 | orchestrator | 2026-01-07 01:07:32 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:35.655576 | orchestrator | 2026-01-07 01:07:35 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:35.657856 | orchestrator | 2026-01-07 01:07:35 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:35.658692 | orchestrator | 2026-01-07 01:07:35 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:35.660530 | orchestrator | 2026-01-07 01:07:35 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:35.660560 | orchestrator | 2026-01-07 01:07:35 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:38.716037 | orchestrator | 2026-01-07 01:07:38 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:38.717026 | orchestrator | 2026-01-07 01:07:38 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:38.719024 | orchestrator | 2026-01-07 01:07:38 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:38.721508 | orchestrator | 2026-01-07 01:07:38 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:38.721559 | orchestrator | 2026-01-07 01:07:38 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:41.759692 | orchestrator | 2026-01-07 01:07:41 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:41.760405 | orchestrator | 2026-01-07 01:07:41 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:41.761252 | orchestrator | 2026-01-07 01:07:41 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:41.762303 | orchestrator | 2026-01-07 01:07:41 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:41.762348 | orchestrator | 2026-01-07 01:07:41 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:44.811616 | orchestrator | 2026-01-07 01:07:44 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:44.812069 | orchestrator | 2026-01-07 01:07:44 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:44.813164 | orchestrator | 2026-01-07 01:07:44 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:44.814045 | orchestrator | 2026-01-07 01:07:44 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:44.814199 | orchestrator | 2026-01-07 01:07:44 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:47.861261 | orchestrator | 2026-01-07 01:07:47 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:47.863302 | orchestrator | 2026-01-07 01:07:47 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:47.865705 | orchestrator | 2026-01-07 01:07:47 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:47.868704 | orchestrator | 2026-01-07 01:07:47 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:47.869398 | orchestrator | 2026-01-07 01:07:47 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:50.924141 | orchestrator | 2026-01-07 01:07:50 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:50.927132 | orchestrator | 2026-01-07 01:07:50 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:50.929211 | orchestrator | 2026-01-07 01:07:50 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:50.930823 | orchestrator | 2026-01-07 01:07:50 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:50.930897 | orchestrator | 2026-01-07 01:07:50 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:53.987506 | orchestrator | 2026-01-07 01:07:53 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:53.990073 | orchestrator | 2026-01-07 01:07:53 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:53.992760 | orchestrator | 2026-01-07 01:07:53 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:53.994657 | orchestrator | 2026-01-07 01:07:53 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:53.994704 | orchestrator | 2026-01-07 01:07:53 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:57.042001 | orchestrator | 2026-01-07 01:07:57 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:07:57.046228 | orchestrator | 2026-01-07 01:07:57 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:07:57.046292 | orchestrator | 2026-01-07 01:07:57 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:07:57.050163 | orchestrator | 2026-01-07 01:07:57 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:07:57.050224 | orchestrator | 2026-01-07 01:07:57 | INFO  | Wait 1 second(s) until the next check
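The polling above repeats the same round of checks, one log line per task ID, sleeping between rounds until each task leaves the STARTED state. A minimal sketch of such a wait loop, assuming a hypothetical `get_task_state` callback (the real OSISM client queries its task backend; the name and signature here are illustrative, not the actual implementation):

```python
import time

# States after which a task no longer needs polling (assumed set).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until it reaches a terminal state.

    Emits one line per task per round plus a wait notice, mirroring
    the log format above. `get_task_state` is a stand-in for the real
    task-backend lookup and is an assumption of this sketch.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    # Toy backend: the task reports STARTED twice, then SUCCESS.
    polls = {"c0ec4360": 0}

    def fake_state(task_id):
        polls[task_id] += 1
        return "SUCCESS" if polls[task_id] >= 3 else "STARTED"

    wait_for_tasks(["c0ec4360"], fake_state, interval=0.01)
```

Each task keeps printing its state even on the round in which it finishes, which matches the log: the SUCCESS line for a task appears once, and the "Wait ..." line stops only when no tasks remain pending.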
2026-01-07 01:08:00.088643 | orchestrator | 2026-01-07 01:08:00 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:08:00.090720 | orchestrator | 2026-01-07 01:08:00 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:08:00.094598 | orchestrator | 2026-01-07 01:08:00 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:08:00.097363 | orchestrator | 2026-01-07 01:08:00 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:08:00.097410 | orchestrator | 2026-01-07 01:08:00 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:08:03.147658 | orchestrator | 2026-01-07 01:08:03 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:08:03.150247 | orchestrator | 2026-01-07 01:08:03 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state STARTED
2026-01-07 01:08:03.150304 | orchestrator | 2026-01-07 01:08:03 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED
2026-01-07 01:08:03.150890 | orchestrator | 2026-01-07 01:08:03 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED
2026-01-07 01:08:03.151084 | orchestrator | 2026-01-07 01:08:03 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:08:06.204225 | orchestrator | 2026-01-07 01:08:06 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:08:06.209486 | orchestrator | 2026-01-07 01:08:06 | INFO  | Task b2ed3a42-fe85-4f43-a33b-5aeb575c823f is in state SUCCESS
2026-01-07 01:08:06.212235 | orchestrator |
2026-01-07 01:08:06.212337 | orchestrator |
2026-01-07 01:08:06.212347 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:08:06.212354 | orchestrator |
2026-01-07 01:08:06.212359 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:08:06.212365 |
orchestrator | Wednesday 07 January 2026 01:03:42 +0000 (0:00:00.310) 0:00:00.310 *****
2026-01-07 01:08:06.212370 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:08:06.212375 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:08:06.212380 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:08:06.212386 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:08:06.212391 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:08:06.212396 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:08:06.212417 | orchestrator |
2026-01-07 01:08:06.212423 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:08:06.212428 | orchestrator | Wednesday 07 January 2026 01:03:43 +0000 (0:00:00.947) 0:00:01.257 *****
2026-01-07 01:08:06.212434 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-01-07 01:08:06.212439 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-01-07 01:08:06.212445 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-01-07 01:08:06.212484 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-01-07 01:08:06.212488 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-01-07 01:08:06.212491 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-01-07 01:08:06.212494 | orchestrator |
2026-01-07 01:08:06.212498 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-01-07 01:08:06.212501 | orchestrator |
2026-01-07 01:08:06.212506 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-07 01:08:06.212512 | orchestrator | Wednesday 07 January 2026 01:03:43 +0000 (0:00:00.645) 0:00:01.902 *****
2026-01-07 01:08:06.212518 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07
01:08:06.212524 | orchestrator | 2026-01-07 01:08:06.212528 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-07 01:08:06.212534 | orchestrator | Wednesday 07 January 2026 01:03:45 +0000 (0:00:01.786) 0:00:03.689 ***** 2026-01-07 01:08:06.212820 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:08:06.212826 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:08:06.212829 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:08:06.212833 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:08:06.212836 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:08:06.212839 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:08:06.212842 | orchestrator | 2026-01-07 01:08:06.212853 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-07 01:08:06.212858 | orchestrator | Wednesday 07 January 2026 01:03:46 +0000 (0:00:01.351) 0:00:05.040 ***** 2026-01-07 01:08:06.212863 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:08:06.212870 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:08:06.212876 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:08:06.212898 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:08:06.212903 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:08:06.212908 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:08:06.212912 | orchestrator | 2026-01-07 01:08:06.212918 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-07 01:08:06.212922 | orchestrator | Wednesday 07 January 2026 01:03:48 +0000 (0:00:01.104) 0:00:06.145 ***** 2026-01-07 01:08:06.213017 | orchestrator | ok: [testbed-node-0] => { 2026-01-07 01:08:06.213025 | orchestrator |  "changed": false, 2026-01-07 01:08:06.213031 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:08:06.213037 | orchestrator | } 2026-01-07 01:08:06.213042 | orchestrator | ok: [testbed-node-1] => { 2026-01-07 01:08:06.213047 | orchestrator |  
"changed": false, 2026-01-07 01:08:06.213054 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:08:06.213059 | orchestrator | } 2026-01-07 01:08:06.213065 | orchestrator | ok: [testbed-node-2] => { 2026-01-07 01:08:06.213070 | orchestrator |  "changed": false, 2026-01-07 01:08:06.213075 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:08:06.213080 | orchestrator | } 2026-01-07 01:08:06.213085 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 01:08:06.213091 | orchestrator |  "changed": false, 2026-01-07 01:08:06.213096 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:08:06.213101 | orchestrator | } 2026-01-07 01:08:06.213106 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 01:08:06.213111 | orchestrator |  "changed": false, 2026-01-07 01:08:06.213116 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:08:06.213121 | orchestrator | } 2026-01-07 01:08:06.213126 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 01:08:06.213142 | orchestrator |  "changed": false, 2026-01-07 01:08:06.213147 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:08:06.213152 | orchestrator | } 2026-01-07 01:08:06.213157 | orchestrator | 2026-01-07 01:08:06.213162 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-07 01:08:06.213167 | orchestrator | Wednesday 07 January 2026 01:03:48 +0000 (0:00:00.852) 0:00:06.997 ***** 2026-01-07 01:08:06.213172 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.213176 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.213181 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.213186 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.213191 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.213196 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.213201 | orchestrator | 2026-01-07 01:08:06.213206 | orchestrator | TASK [service-ks-register : neutron | 
Creating services] *********************** 2026-01-07 01:08:06.213211 | orchestrator | Wednesday 07 January 2026 01:03:49 +0000 (0:00:00.770) 0:00:07.767 ***** 2026-01-07 01:08:06.213216 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-01-07 01:08:06.213221 | orchestrator | 2026-01-07 01:08:06.213225 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-01-07 01:08:06.213230 | orchestrator | Wednesday 07 January 2026 01:03:53 +0000 (0:00:03.569) 0:00:11.336 ***** 2026-01-07 01:08:06.213235 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-01-07 01:08:06.213241 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-01-07 01:08:06.213246 | orchestrator | 2026-01-07 01:08:06.213278 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-01-07 01:08:06.213284 | orchestrator | Wednesday 07 January 2026 01:04:00 +0000 (0:00:07.236) 0:00:18.573 ***** 2026-01-07 01:08:06.213289 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:08:06.213294 | orchestrator | 2026-01-07 01:08:06.213298 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-01-07 01:08:06.213303 | orchestrator | Wednesday 07 January 2026 01:04:03 +0000 (0:00:03.538) 0:00:22.111 ***** 2026-01-07 01:08:06.213308 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:08:06.213313 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-01-07 01:08:06.213318 | orchestrator | 2026-01-07 01:08:06.213323 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-01-07 01:08:06.213328 | orchestrator | Wednesday 07 January 2026 01:04:08 +0000 (0:00:04.284) 0:00:26.396 ***** 2026-01-07 01:08:06.213332 | 
orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:08:06.213337 | orchestrator | 2026-01-07 01:08:06.213342 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-01-07 01:08:06.213347 | orchestrator | Wednesday 07 January 2026 01:04:12 +0000 (0:00:04.136) 0:00:30.534 ***** 2026-01-07 01:08:06.213352 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-01-07 01:08:06.213357 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-01-07 01:08:06.213362 | orchestrator | 2026-01-07 01:08:06.213367 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-07 01:08:06.213371 | orchestrator | Wednesday 07 January 2026 01:04:20 +0000 (0:00:07.702) 0:00:38.236 ***** 2026-01-07 01:08:06.213376 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.213381 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.213386 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.213391 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.213396 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.213401 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.213406 | orchestrator | 2026-01-07 01:08:06.213411 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-01-07 01:08:06.213420 | orchestrator | Wednesday 07 January 2026 01:04:21 +0000 (0:00:00.913) 0:00:39.150 ***** 2026-01-07 01:08:06.213425 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.213430 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.213435 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.213440 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.213445 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.213450 | orchestrator | skipping: [testbed-node-5] 2026-01-07 
01:08:06.213455 | orchestrator | 2026-01-07 01:08:06.213465 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-01-07 01:08:06.213470 | orchestrator | Wednesday 07 January 2026 01:04:24 +0000 (0:00:03.064) 0:00:42.214 ***** 2026-01-07 01:08:06.213475 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:08:06.213480 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:08:06.213485 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:08:06.213489 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:08:06.213494 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:08:06.213500 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:08:06.213505 | orchestrator | 2026-01-07 01:08:06.213513 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-07 01:08:06.213518 | orchestrator | Wednesday 07 January 2026 01:04:25 +0000 (0:00:01.182) 0:00:43.396 ***** 2026-01-07 01:08:06.213523 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.213528 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.213533 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.213538 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.213543 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.213549 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.213554 | orchestrator | 2026-01-07 01:08:06.213559 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-01-07 01:08:06.213564 | orchestrator | Wednesday 07 January 2026 01:04:28 +0000 (0:00:03.320) 0:00:46.717 ***** 2026-01-07 01:08:06.213571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.213597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.213605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.213636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.213644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.213650 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.213655 | orchestrator | 2026-01-07 01:08:06.213660 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-01-07 01:08:06.213665 | orchestrator | Wednesday 07 January 2026 01:04:32 +0000 (0:00:03.907) 0:00:50.625 ***** 2026-01-07 01:08:06.213670 | orchestrator | [WARNING]: Skipped 2026-01-07 01:08:06.213675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-01-07 01:08:06.213680 | orchestrator | due to this access issue: 2026-01-07 01:08:06.213685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-01-07 01:08:06.213689 | orchestrator | a directory 2026-01-07 01:08:06.213694 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:08:06.213699 | orchestrator | 2026-01-07 01:08:06.213703 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-07 01:08:06.213723 | orchestrator | Wednesday 07 January 2026 01:04:33 +0000 (0:00:00.761) 0:00:51.387 ***** 2026-01-07 01:08:06.213729 | orchestrator | included: 
/ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:08:06.213738 | orchestrator | 2026-01-07 01:08:06.213742 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-01-07 01:08:06.213747 | orchestrator | Wednesday 07 January 2026 01:04:34 +0000 (0:00:01.164) 0:00:52.551 ***** 2026-01-07 01:08:06.213752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.213759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.213765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.213770 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.213789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.213798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.213803 | orchestrator | 2026-01-07 01:08:06.213808 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-07 01:08:06.213812 | orchestrator | Wednesday 07 January 2026 01:04:38 +0000 (0:00:03.669) 0:00:56.220 ***** 2026-01-07 01:08:06.213820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.213825 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.213830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.213836 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.213841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.213949 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.213958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.213963 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.213968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.213973 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.213981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.213987 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.213993 | orchestrator | 2026-01-07 01:08:06.213998 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-07 01:08:06.214003 | orchestrator | Wednesday 07 January 2026 01:04:43 +0000 (0:00:05.274) 0:01:01.495 ***** 2026-01-07 01:08:06.214008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.214051 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214079 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.214089 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.214099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214104 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.214115 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.214119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214127 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214133 | orchestrator | 2026-01-07 01:08:06.214138 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-07 01:08:06.214143 | orchestrator | Wednesday 07 January 2026 01:04:46 +0000 (0:00:03.373) 0:01:04.869 ***** 2026-01-07 01:08:06.214148 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.214153 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214159 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.214164 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214169 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214174 | orchestrator | skipping: [testbed-node-5] 2026-01-07 
01:08:06.214179 | orchestrator | 2026-01-07 01:08:06.214185 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-07 01:08:06.214195 | orchestrator | Wednesday 07 January 2026 01:04:50 +0000 (0:00:03.339) 0:01:08.208 ***** 2026-01-07 01:08:06.214200 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214205 | orchestrator | 2026-01-07 01:08:06.214217 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-07 01:08:06.214222 | orchestrator | Wednesday 07 January 2026 01:04:50 +0000 (0:00:00.102) 0:01:08.311 ***** 2026-01-07 01:08:06.214227 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214231 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.214236 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.214240 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214245 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214250 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214255 | orchestrator | 2026-01-07 01:08:06.214259 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-07 01:08:06.214264 | orchestrator | Wednesday 07 January 2026 01:04:50 +0000 (0:00:00.641) 0:01:08.952 ***** 2026-01-07 01:08:06.214269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.214274 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.214290 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.214295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214299 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214309 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 
01:08:06.214325 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.214330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214335 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214340 | orchestrator | 2026-01-07 01:08:06.214346 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-07 01:08:06.214353 | orchestrator | Wednesday 07 January 2026 01:04:54 +0000 (0:00:03.269) 0:01:12.222 ***** 2026-01-07 01:08:06.214358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.214368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.214378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 
01:08:06.214384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.214392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.214401 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.214407 | orchestrator | 2026-01-07 01:08:06.214412 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-07 01:08:06.214417 | orchestrator | Wednesday 07 January 2026 01:04:58 +0000 (0:00:04.378) 0:01:16.600 ***** 2026-01-07 01:08:06.214422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.214431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.214437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.214444 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.214452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.214458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.214462 | orchestrator | 2026-01-07 01:08:06.214467 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-07 01:08:06.214472 | orchestrator | Wednesday 
07 January 2026 01:05:04 +0000 (0:00:06.250) 0:01:22.851 ***** 2026-01-07 01:08:06.214481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.214486 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.214492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 
01:08:06.214500 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214515 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214525 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214536 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.214551 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.214556 | orchestrator | 2026-01-07 01:08:06.214562 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-07 01:08:06.214568 | orchestrator | Wednesday 07 January 2026 01:05:07 +0000 (0:00:03.151) 0:01:26.002 ***** 2026-01-07 01:08:06.214574 | orchestrator | 
skipping: [testbed-node-4] 2026-01-07 01:08:06.214583 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214588 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214594 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:06.214600 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:06.214606 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:06.214612 | orchestrator | 2026-01-07 01:08:06.214618 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-07 01:08:06.214624 | orchestrator | Wednesday 07 January 2026 01:05:10 +0000 (0:00:03.077) 0:01:29.080 ***** 2026-01-07 01:08:06.214633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214640 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214687 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.214701 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.214722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.214739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-01-07 01:08:06.214747 | orchestrator | 2026-01-07 01:08:06.214754 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-01-07 01:08:06.214762 | orchestrator | Wednesday 07 January 2026 01:05:16 +0000 (0:00:05.153) 0:01:34.233 ***** 2026-01-07 01:08:06.214769 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.214776 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214782 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.214790 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214797 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214803 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214810 | orchestrator | 2026-01-07 01:08:06.214817 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-07 01:08:06.214823 | orchestrator | Wednesday 07 January 2026 01:05:19 +0000 (0:00:03.370) 0:01:37.603 ***** 2026-01-07 01:08:06.214830 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214836 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.214841 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214847 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.214853 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214857 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214862 | orchestrator | 2026-01-07 01:08:06.214867 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-07 01:08:06.214873 | orchestrator | Wednesday 07 January 2026 01:05:23 +0000 (0:00:03.853) 0:01:41.457 ***** 2026-01-07 01:08:06.214878 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214913 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.214918 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.214923 | orchestrator | 
skipping: [testbed-node-3] 2026-01-07 01:08:06.214927 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214932 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214936 | orchestrator | 2026-01-07 01:08:06.214941 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-07 01:08:06.214946 | orchestrator | Wednesday 07 January 2026 01:05:25 +0000 (0:00:02.257) 0:01:43.714 ***** 2026-01-07 01:08:06.214950 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.214959 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.214964 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.214969 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.214974 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.214979 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.214984 | orchestrator | 2026-01-07 01:08:06.214989 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-07 01:08:06.214994 | orchestrator | Wednesday 07 January 2026 01:05:28 +0000 (0:00:02.505) 0:01:46.220 ***** 2026-01-07 01:08:06.215000 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215006 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215011 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215017 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215027 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215033 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215039 | orchestrator | 2026-01-07 01:08:06.215044 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-07 01:08:06.215049 | orchestrator | Wednesday 07 January 2026 01:05:31 +0000 (0:00:03.299) 0:01:49.520 ***** 2026-01-07 01:08:06.215055 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215060 | orchestrator | 
skipping: [testbed-node-0] 2026-01-07 01:08:06.215064 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215069 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215073 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215078 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215082 | orchestrator | 2026-01-07 01:08:06.215086 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-01-07 01:08:06.215091 | orchestrator | Wednesday 07 January 2026 01:05:33 +0000 (0:00:02.478) 0:01:51.998 ***** 2026-01-07 01:08:06.215095 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:08:06.215101 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215106 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:08:06.215111 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215116 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:08:06.215121 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:08:06.215126 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215131 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215136 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:08:06.215141 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215145 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:08:06.215151 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215155 | orchestrator | 2026-01-07 01:08:06.215160 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-07 
01:08:06.215164 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:02.419) 0:01:54.418 ***** 2026-01-07 01:08:06.215174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.215185 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.215196 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.215210 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.215220 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215227 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.215232 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.215244 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215248 | orchestrator | 2026-01-07 01:08:06.215253 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-07 01:08:06.215257 | orchestrator | Wednesday 07 January 2026 01:05:38 +0000 (0:00:01.874) 0:01:56.292 ***** 2026-01-07 01:08:06.215262 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.215266 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.215280 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.215289 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.215304 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.215313 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.215322 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215327 | orchestrator | 2026-01-07 01:08:06.215332 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-01-07 01:08:06.215337 | orchestrator | Wednesday 07 January 2026 01:05:40 +0000 (0:00:02.082) 0:01:58.374 ***** 2026-01-07 01:08:06.215341 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215348 | orchestrator | skipping: [testbed-node-1] 2026-01-07 
01:08:06.215353 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215357 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215362 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215366 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215370 | orchestrator | 2026-01-07 01:08:06.215374 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-01-07 01:08:06.215379 | orchestrator | Wednesday 07 January 2026 01:05:42 +0000 (0:00:01.843) 0:02:00.217 ***** 2026-01-07 01:08:06.215383 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215388 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215392 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215396 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:08:06.215401 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:08:06.215405 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:08:06.215410 | orchestrator | 2026-01-07 01:08:06.215414 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-01-07 01:08:06.215418 | orchestrator | Wednesday 07 January 2026 01:05:45 +0000 (0:00:03.629) 0:02:03.847 ***** 2026-01-07 01:08:06.215423 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215427 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215432 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215436 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215441 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215448 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215453 | orchestrator | 2026-01-07 01:08:06.215457 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-01-07 01:08:06.215462 | orchestrator | Wednesday 07 January 2026 01:05:50 +0000 (0:00:04.598) 0:02:08.446 ***** 2026-01-07 
01:08:06.215466 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215471 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215475 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215480 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215484 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215489 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215493 | orchestrator | 2026-01-07 01:08:06.215498 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-01-07 01:08:06.215502 | orchestrator | Wednesday 07 January 2026 01:05:54 +0000 (0:00:03.870) 0:02:12.316 ***** 2026-01-07 01:08:06.215507 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215511 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215516 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215520 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215525 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215529 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215534 | orchestrator | 2026-01-07 01:08:06.215538 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-01-07 01:08:06.215545 | orchestrator | Wednesday 07 January 2026 01:05:56 +0000 (0:00:02.408) 0:02:14.725 ***** 2026-01-07 01:08:06.215550 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215554 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215559 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215563 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215568 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215608 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215613 | orchestrator | 2026-01-07 01:08:06.215618 | orchestrator | TASK [neutron : Copying over nsx.ini] 
****************************************** 2026-01-07 01:08:06.215622 | orchestrator | Wednesday 07 January 2026 01:05:58 +0000 (0:00:02.222) 0:02:16.948 ***** 2026-01-07 01:08:06.215627 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215632 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215636 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215641 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215645 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215650 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215655 | orchestrator | 2026-01-07 01:08:06.215659 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-01-07 01:08:06.215664 | orchestrator | Wednesday 07 January 2026 01:06:00 +0000 (0:00:01.749) 0:02:18.697 ***** 2026-01-07 01:08:06.215669 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215673 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215678 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215683 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215688 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215692 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215697 | orchestrator | 2026-01-07 01:08:06.215702 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-01-07 01:08:06.215706 | orchestrator | Wednesday 07 January 2026 01:06:03 +0000 (0:00:02.677) 0:02:21.375 ***** 2026-01-07 01:08:06.215711 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215716 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215721 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215725 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215730 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215735 | orchestrator | skipping: 
[testbed-node-4] 2026-01-07 01:08:06.215740 | orchestrator | 2026-01-07 01:08:06.215744 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-01-07 01:08:06.215756 | orchestrator | Wednesday 07 January 2026 01:06:09 +0000 (0:00:05.928) 0:02:27.303 ***** 2026-01-07 01:08:06.215761 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-07 01:08:06.215767 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-07 01:08:06.215772 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215777 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215782 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-07 01:08:06.215786 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215791 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-07 01:08:06.215795 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215806 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-07 01:08:06.215811 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215816 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-07 01:08:06.215821 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215826 | orchestrator | 2026-01-07 01:08:06.215831 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-01-07 01:08:06.215837 | orchestrator | Wednesday 07 January 2026 01:06:11 +0000 (0:00:02.700) 0:02:30.004 ***** 2026-01-07 01:08:06.215843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.215849 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.215859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.215864 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.215869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.215914 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.215921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:08:06.215926 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.215936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': 
True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.215941 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.215946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:08:06.215951 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.215956 | orchestrator | 2026-01-07 01:08:06.215961 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-01-07 01:08:06.215966 | orchestrator | Wednesday 07 January 2026 01:06:13 +0000 (0:00:02.062) 0:02:32.067 ***** 2026-01-07 01:08:06.215975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.215981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.215993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.215998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.216003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:08:06.216014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:08:06.216022 | orchestrator | 2026-01-07 01:08:06.216026 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-07 01:08:06.216031 | orchestrator | Wednesday 07 January 2026 01:06:16 +0000 (0:00:02.964) 0:02:35.031 ***** 2026-01-07 01:08:06.216036 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:06.216040 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:06.216045 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:06.216050 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:06.216055 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:06.216060 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:06.216065 | orchestrator | 2026-01-07 01:08:06.216069 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-01-07 01:08:06.216074 | orchestrator | Wednesday 07 January 2026 01:06:17 +0000 (0:00:00.490) 0:02:35.521 ***** 2026-01-07 01:08:06.216079 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:06.216084 | orchestrator | 2026-01-07 01:08:06.216088 | orchestrator | 
TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-01-07 01:08:06.216093 | orchestrator | Wednesday 07 January 2026 01:06:19 +0000 (0:00:02.135) 0:02:37.657 ***** 2026-01-07 01:08:06.216098 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:06.216103 | orchestrator | 2026-01-07 01:08:06.216107 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-01-07 01:08:06.216112 | orchestrator | Wednesday 07 January 2026 01:06:21 +0000 (0:00:02.296) 0:02:39.954 ***** 2026-01-07 01:08:06.216117 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:06.216122 | orchestrator | 2026-01-07 01:08:06.216127 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-07 01:08:06.216131 | orchestrator | Wednesday 07 January 2026 01:07:01 +0000 (0:00:39.411) 0:03:19.365 ***** 2026-01-07 01:08:06.216136 | orchestrator | 2026-01-07 01:08:06.216140 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-07 01:08:06.216145 | orchestrator | Wednesday 07 January 2026 01:07:01 +0000 (0:00:00.069) 0:03:19.435 ***** 2026-01-07 01:08:06.216150 | orchestrator | 2026-01-07 01:08:06.216155 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-07 01:08:06.216159 | orchestrator | Wednesday 07 January 2026 01:07:01 +0000 (0:00:00.276) 0:03:19.712 ***** 2026-01-07 01:08:06.216164 | orchestrator | 2026-01-07 01:08:06.216169 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-07 01:08:06.216173 | orchestrator | Wednesday 07 January 2026 01:07:01 +0000 (0:00:00.066) 0:03:19.778 ***** 2026-01-07 01:08:06.216177 | orchestrator | 2026-01-07 01:08:06.216185 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-07 01:08:06.216190 | orchestrator | Wednesday 07 
January 2026 01:07:01 +0000 (0:00:00.067) 0:03:19.845 ***** 2026-01-07 01:08:06.216194 | orchestrator | 2026-01-07 01:08:06.216199 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-07 01:08:06.216203 | orchestrator | Wednesday 07 January 2026 01:07:01 +0000 (0:00:00.067) 0:03:19.912 ***** 2026-01-07 01:08:06.216208 | orchestrator | 2026-01-07 01:08:06.216213 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-01-07 01:08:06.216218 | orchestrator | Wednesday 07 January 2026 01:07:01 +0000 (0:00:00.069) 0:03:19.982 ***** 2026-01-07 01:08:06.216222 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:06.216228 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:06.216233 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:06.216238 | orchestrator | 2026-01-07 01:08:06.216243 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-01-07 01:08:06.216247 | orchestrator | Wednesday 07 January 2026 01:07:22 +0000 (0:00:20.278) 0:03:40.260 ***** 2026-01-07 01:08:06.216253 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:08:06.216258 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:08:06.216266 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:08:06.216272 | orchestrator | 2026-01-07 01:08:06.216277 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:08:06.216282 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 01:08:06.216288 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-07 01:08:06.216293 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-07 01:08:06.216298 | orchestrator | testbed-node-3 : ok=15  changed=7  
unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 01:08:06.216303 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 01:08:06.216310 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 01:08:06.216315 | orchestrator | 2026-01-07 01:08:06.216320 | orchestrator | 2026-01-07 01:08:06.216325 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:08:06.216330 | orchestrator | Wednesday 07 January 2026 01:08:05 +0000 (0:00:43.521) 0:04:23.782 ***** 2026-01-07 01:08:06.216335 | orchestrator | =============================================================================== 2026-01-07 01:08:06.216340 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 43.52s 2026-01-07 01:08:06.216345 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.41s 2026-01-07 01:08:06.216349 | orchestrator | neutron : Restart neutron-server container ----------------------------- 20.28s 2026-01-07 01:08:06.216354 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.70s 2026-01-07 01:08:06.216359 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.24s 2026-01-07 01:08:06.216364 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.25s 2026-01-07 01:08:06.216369 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 5.93s 2026-01-07 01:08:06.216374 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.27s 2026-01-07 01:08:06.216379 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.15s 2026-01-07 01:08:06.216384 | orchestrator | neutron : Copying over metering_agent.ini 
------------------------------- 4.60s 2026-01-07 01:08:06.216389 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.38s 2026-01-07 01:08:06.216394 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.29s 2026-01-07 01:08:06.216399 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.14s 2026-01-07 01:08:06.216404 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.91s 2026-01-07 01:08:06.216408 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.87s 2026-01-07 01:08:06.216413 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.85s 2026-01-07 01:08:06.216418 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.67s 2026-01-07 01:08:06.216423 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.63s 2026-01-07 01:08:06.216428 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.57s 2026-01-07 01:08:06.216433 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.54s 2026-01-07 01:08:06.216438 | orchestrator | 2026-01-07 01:08:06 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:08:06.216443 | orchestrator | 2026-01-07 01:08:06 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED 2026-01-07 01:08:06.216454 | orchestrator | 2026-01-07 01:08:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:08:09.260351 | orchestrator | 2026-01-07 01:08:09 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED 2026-01-07 01:08:09.261965 | orchestrator | 2026-01-07 01:08:09 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:08:09.265249 | orchestrator | 2026-01-07 01:08:09 | 
INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED 2026-01-07 01:08:09.269663 | orchestrator | 2026-01-07 01:08:09 | INFO  | Task 17870a06-4afc-4f3e-bed1-0bab01b885fa is in state STARTED 2026-01-07 01:08:09.269726 | orchestrator | 2026-01-07 01:08:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:08:12.308821 | orchestrator | 2026-01-07 01:08:12 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED 2026-01-07 01:08:12.312902 | orchestrator | 2026-01-07 01:08:12 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:08:12.315315 | orchestrator | 2026-01-07 01:08:12 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED 2026-01-07 01:08:12.317404 | orchestrator | 2026-01-07 01:08:12 | INFO  | Task 17870a06-4afc-4f3e-bed1-0bab01b885fa is in state STARTED 2026-01-07 01:08:12.317473 | orchestrator | 2026-01-07 01:08:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:08:15.383333 | orchestrator | 2026-01-07 01:08:15 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED 2026-01-07 01:08:15.385773 | orchestrator | 2026-01-07 01:08:15 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:08:15.388025 | orchestrator | 2026-01-07 01:08:15 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state STARTED 2026-01-07 01:08:15.392920 | orchestrator | 2026-01-07 01:08:15 | INFO  | Task 17870a06-4afc-4f3e-bed1-0bab01b885fa is in state STARTED 2026-01-07 01:08:15.395213 | orchestrator | 2026-01-07 01:08:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:08:18.453655 | orchestrator | 2026-01-07 01:08:18 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED 2026-01-07 01:08:18.456521 | orchestrator | 2026-01-07 01:08:18 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:08:18.459423 | orchestrator | 2026-01-07 01:08:18 | INFO  | Task 
688f3676-4cf4-4cd4-a4e2-615a0607e7b2 is in state STARTED 2026-01-07 01:08:18.461817 | orchestrator | 2026-01-07 01:08:18 | INFO  | Task 52ed49e1-27a8-40c7-a0f8-3eaedab2c490 is in state SUCCESS 2026-01-07 01:08:18.463632 | orchestrator | 2026-01-07 01:08:18.463676 | orchestrator | 2026-01-07 01:08:18.463683 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:08:18.463689 | orchestrator | 2026-01-07 01:08:18.463694 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:08:18.463700 | orchestrator | Wednesday 07 January 2026 01:07:14 +0000 (0:00:00.279) 0:00:00.279 ***** 2026-01-07 01:08:18.463705 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:08:18.463710 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:08:18.463715 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:08:18.463719 | orchestrator | 2026-01-07 01:08:18.463724 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:08:18.463729 | orchestrator | Wednesday 07 January 2026 01:07:14 +0000 (0:00:00.284) 0:00:00.563 ***** 2026-01-07 01:08:18.463734 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-07 01:08:18.463739 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-07 01:08:18.463744 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-07 01:08:18.463763 | orchestrator | 2026-01-07 01:08:18.463768 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-07 01:08:18.463773 | orchestrator | 2026-01-07 01:08:18.463778 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-07 01:08:18.463783 | orchestrator | Wednesday 07 January 2026 01:07:14 +0000 (0:00:00.450) 0:00:01.014 ***** 2026-01-07 01:08:18.463788 | orchestrator | included: 
/ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:08:18.463793 | orchestrator | 2026-01-07 01:08:18.463798 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-07 01:08:18.463802 | orchestrator | Wednesday 07 January 2026 01:07:15 +0000 (0:00:00.557) 0:00:01.571 ***** 2026-01-07 01:08:18.463807 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-07 01:08:18.463812 | orchestrator | 2026-01-07 01:08:18.463817 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-07 01:08:18.463822 | orchestrator | Wednesday 07 January 2026 01:07:18 +0000 (0:00:03.227) 0:00:04.799 ***** 2026-01-07 01:08:18.463826 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-07 01:08:18.463832 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-07 01:08:18.463836 | orchestrator | 2026-01-07 01:08:18.463841 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-07 01:08:18.463846 | orchestrator | Wednesday 07 January 2026 01:07:24 +0000 (0:00:06.296) 0:00:11.095 ***** 2026-01-07 01:08:18.463851 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:08:18.463855 | orchestrator | 2026-01-07 01:08:18.463893 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-07 01:08:18.463898 | orchestrator | Wednesday 07 January 2026 01:07:27 +0000 (0:00:03.022) 0:00:14.118 ***** 2026-01-07 01:08:18.463903 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:08:18.463908 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-01-07 01:08:18.463913 | orchestrator | 2026-01-07 01:08:18.463918 | orchestrator | TASK 
[service-ks-register : placement | Creating roles] ************************ 2026-01-07 01:08:18.463923 | orchestrator | Wednesday 07 January 2026 01:07:31 +0000 (0:00:03.695) 0:00:17.814 ***** 2026-01-07 01:08:18.463928 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:08:18.463933 | orchestrator | 2026-01-07 01:08:18.463938 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-01-07 01:08:18.463942 | orchestrator | Wednesday 07 January 2026 01:07:34 +0000 (0:00:03.046) 0:00:20.861 ***** 2026-01-07 01:08:18.463947 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-07 01:08:18.463974 | orchestrator | 2026-01-07 01:08:18.463979 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-07 01:08:18.463984 | orchestrator | Wednesday 07 January 2026 01:07:37 +0000 (0:00:03.234) 0:00:24.096 ***** 2026-01-07 01:08:18.463989 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:18.463994 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:18.463999 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:18.464003 | orchestrator | 2026-01-07 01:08:18.464008 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-07 01:08:18.464013 | orchestrator | Wednesday 07 January 2026 01:07:38 +0000 (0:00:00.309) 0:00:24.405 ***** 2026-01-07 01:08:18.464028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464061 | orchestrator | 2026-01-07 01:08:18.464066 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-07 01:08:18.464070 | orchestrator | Wednesday 07 January 2026 01:07:39 +0000 (0:00:00.910) 0:00:25.316 ***** 2026-01-07 01:08:18.464075 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:18.464080 | orchestrator | 2026-01-07 01:08:18.464085 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-07 01:08:18.464089 | orchestrator | Wednesday 07 January 2026 01:07:39 +0000 (0:00:00.137) 0:00:25.454 ***** 2026-01-07 01:08:18.464094 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:18.464099 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:18.464104 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:18.464109 | orchestrator | 2026-01-07 01:08:18.464113 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-07 01:08:18.464118 | orchestrator | Wednesday 07 January 2026 01:07:39 +0000 (0:00:00.557) 0:00:26.012 ***** 2026-01-07 01:08:18.464123 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:08:18.464128 | orchestrator | 2026-01-07 01:08:18.464132 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-07 01:08:18.464137 | orchestrator | Wednesday 07 January 2026 01:07:40 +0000 (0:00:00.576) 0:00:26.588 ***** 2026-01-07 01:08:18.464142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464166 | orchestrator | 2026-01-07 01:08:18.464171 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-07 01:08:18.464176 | orchestrator | Wednesday 07 January 2026 01:07:41 +0000 (0:00:01.469) 0:00:28.058 ***** 2026-01-07 01:08:18.464181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:08:18.464186 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:18.464191 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:08:18.464199 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:18.464208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:08:18.464213 | orchestrator | skipping: [testbed-node-2] 2026-01-07 
01:08:18.464218 | orchestrator | 2026-01-07 01:08:18.464223 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-07 01:08:18.464228 | orchestrator | Wednesday 07 January 2026 01:07:42 +0000 (0:00:01.053) 0:00:29.111 ***** 2026-01-07 01:08:18.464233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:08:18.464238 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:18.464244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:08:18.464249 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:18.464255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:08:18.464263 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:18.464268 | orchestrator | 2026-01-07 01:08:18.464273 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-07 01:08:18.464279 | orchestrator | Wednesday 07 January 2026 01:07:43 +0000 (0:00:00.727) 0:00:29.839 ***** 2026-01-07 01:08:18.464288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464306 | orchestrator | 2026-01-07 01:08:18.464311 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-07 01:08:18.464317 | orchestrator | Wednesday 07 January 2026 01:07:44 +0000 (0:00:01.283) 0:00:31.123 ***** 2026-01-07 01:08:18.464322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464405 | orchestrator | 2026-01-07 01:08:18.464412 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-07 01:08:18.464417 | orchestrator | Wednesday 07 January 2026 01:07:47 +0000 (0:00:02.399) 0:00:33.522 ***** 2026-01-07 01:08:18.464422 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-07 01:08:18.464428 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-07 01:08:18.464433 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-07 01:08:18.464438 | orchestrator | 2026-01-07 01:08:18.464444 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-07 01:08:18.464449 | orchestrator | Wednesday 07 January 2026 01:07:48 +0000 (0:00:01.366) 0:00:34.888 ***** 2026-01-07 01:08:18.464454 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:18.464459 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:18.464464 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:18.464469 | orchestrator | 2026-01-07 01:08:18.464473 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-07 01:08:18.464478 | orchestrator | Wednesday 07 January 2026 01:07:49 +0000 (0:00:01.304) 0:00:36.192 ***** 2026-01-07 01:08:18.464484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:08:18.464493 | orchestrator | 
skipping: [testbed-node-0] 2026-01-07 01:08:18.464498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:08:18.464504 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:18.464518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:08:18.464524 | 
orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:18.464529 | orchestrator | 2026-01-07 01:08:18.464535 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-01-07 01:08:18.464540 | orchestrator | Wednesday 07 January 2026 01:07:50 +0000 (0:00:00.517) 0:00:36.710 ***** 2026-01-07 01:08:18.464545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:08:18.464565 | orchestrator | 2026-01-07 01:08:18.464570 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-01-07 01:08:18.464575 | orchestrator | Wednesday 07 January 2026 01:07:51 +0000 (0:00:01.147) 0:00:37.858 ***** 2026-01-07 01:08:18.464580 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:18.464586 | orchestrator | 2026-01-07 01:08:18.464591 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-01-07 01:08:18.464596 | orchestrator | Wednesday 07 January 2026 01:07:54 +0000 (0:00:02.462) 0:00:40.320 ***** 2026-01-07 01:08:18.464601 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:18.464606 | orchestrator | 2026-01-07 01:08:18.464612 | orchestrator | TASK [placement : Running placement bootstrap container] 
*********************** 2026-01-07 01:08:18.464617 | orchestrator | Wednesday 07 January 2026 01:07:56 +0000 (0:00:02.096) 0:00:42.417 ***** 2026-01-07 01:08:18.464621 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:18.464626 | orchestrator | 2026-01-07 01:08:18.464633 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-07 01:08:18.464638 | orchestrator | Wednesday 07 January 2026 01:08:08 +0000 (0:00:12.802) 0:00:55.219 ***** 2026-01-07 01:08:18.464643 | orchestrator | 2026-01-07 01:08:18.464647 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-07 01:08:18.464652 | orchestrator | Wednesday 07 January 2026 01:08:09 +0000 (0:00:00.067) 0:00:55.286 ***** 2026-01-07 01:08:18.464657 | orchestrator | 2026-01-07 01:08:18.464664 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-07 01:08:18.464669 | orchestrator | Wednesday 07 January 2026 01:08:09 +0000 (0:00:00.069) 0:00:55.356 ***** 2026-01-07 01:08:18.464674 | orchestrator | 2026-01-07 01:08:18.464679 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-01-07 01:08:18.464683 | orchestrator | Wednesday 07 January 2026 01:08:09 +0000 (0:00:00.086) 0:00:55.442 ***** 2026-01-07 01:08:18.464688 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:18.464693 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:18.464698 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:18.464702 | orchestrator | 2026-01-07 01:08:18.464707 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:08:18.464716 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:08:18.464721 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2026-01-07 01:08:18.464726 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 01:08:18.464730 | orchestrator | 2026-01-07 01:08:18.464735 | orchestrator | 2026-01-07 01:08:18.464740 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:08:18.464745 | orchestrator | Wednesday 07 January 2026 01:08:16 +0000 (0:00:07.473) 0:01:02.916 ***** 2026-01-07 01:08:18.464749 | orchestrator | =============================================================================== 2026-01-07 01:08:18.464754 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.80s 2026-01-07 01:08:18.464759 | orchestrator | placement : Restart placement-api container ----------------------------- 7.47s 2026-01-07 01:08:18.464764 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.30s 2026-01-07 01:08:18.464769 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.70s 2026-01-07 01:08:18.464773 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.23s 2026-01-07 01:08:18.464778 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.23s 2026-01-07 01:08:18.464783 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.05s 2026-01-07 01:08:18.464788 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.02s 2026-01-07 01:08:18.464792 | orchestrator | placement : Creating placement databases -------------------------------- 2.46s 2026-01-07 01:08:18.464797 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.40s 2026-01-07 01:08:18.464802 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.10s 2026-01-07 01:08:18.464806 | 
orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.47s 2026-01-07 01:08:18.464811 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.37s 2026-01-07 01:08:18.464816 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s 2026-01-07 01:08:18.464821 | orchestrator | placement : Copying over config.json files for services ----------------- 1.28s 2026-01-07 01:08:18.464826 | orchestrator | placement : Check placement containers ---------------------------------- 1.15s 2026-01-07 01:08:18.464831 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.05s 2026-01-07 01:08:18.464835 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.91s 2026-01-07 01:08:18.464840 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.73s 2026-01-07 01:08:18.464845 | orchestrator | placement : include_tasks ----------------------------------------------- 0.58s 2026-01-07 01:08:18.464849 | orchestrator | 2026-01-07 01:08:18 | INFO  | Task 17870a06-4afc-4f3e-bed1-0bab01b885fa is in state STARTED 2026-01-07 01:08:18.464854 | orchestrator | 2026-01-07 01:08:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:08:21.512824 | orchestrator | 2026-01-07 01:08:21 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED 2026-01-07 01:08:21.515327 | orchestrator | 2026-01-07 01:08:21 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state STARTED 2026-01-07 01:08:21.518246 | orchestrator | 2026-01-07 01:08:21 | INFO  | Task 688f3676-4cf4-4cd4-a4e2-615a0607e7b2 is in state STARTED 2026-01-07 01:08:21.520316 | orchestrator | 2026-01-07 01:08:21 | INFO  | Task 17870a06-4afc-4f3e-bed1-0bab01b885fa is in state STARTED 2026-01-07 01:08:21.520359 | orchestrator | 2026-01-07 01:08:21 | INFO  | Wait 1 second(s) until the next check 
2026-01-07 01:08:45.920237 | orchestrator | 2026-01-07 01:08:45 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED 2026-01-07 01:08:45.929511 | orchestrator | 2026-01-07 01:08:45 | INFO  | Task 17870a06-4afc-4f3e-bed1-0bab01b885fa is in state SUCCESS 2026-01-07 01:08:45.929564 | orchestrator | 2026-01-07 01:08:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:08:48.963555 | orchestrator | 2026-01-07
01:08:48 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:08:51.990294 | orchestrator | 2026-01-07 01:08:51 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED
2026-01-07 01:08:51.993127 | orchestrator | 2026-01-07 01:08:51 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:08:51.993314 | orchestrator | 2026-01-07 01:08:51 | INFO  | Task 95514933-91aa-4799-a5d6-007255640141 is in state SUCCESS
2026-01-07 01:08:51.993551 | orchestrator |
2026-01-07 01:08:51.993560 | orchestrator |
2026-01-07 01:08:51.993564 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:08:51.993568 | orchestrator |
2026-01-07 01:08:51.993571 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:08:51.993574 | orchestrator | Wednesday 07 January 2026 01:08:12 +0000 (0:00:00.285) 0:00:00.285 *****
2026-01-07 01:08:51.993578 | orchestrator | ok: [testbed-manager]
2026-01-07 01:08:51.993581 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:08:51.993584 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:08:51.993588 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:08:51.993591 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:08:51.993597 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:08:51.993602 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:08:51.993607 | orchestrator |
2026-01-07 01:08:51.993612 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:08:51.993634 | orchestrator | Wednesday 07 January 2026 01:08:13 +0000 (0:00:00.905) 0:00:01.190 *****
2026-01-07 01:08:51.993640 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-01-07 01:08:51.993646 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-01-07 01:08:51.993652 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-01-07 01:08:51.993655 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-01-07 01:08:51.993659 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-01-07 01:08:51.993662 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-01-07 01:08:51.993665 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-01-07 01:08:51.993668 | orchestrator |
2026-01-07 01:08:51.993671 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-07 01:08:51.993674 | orchestrator |
2026-01-07 01:08:51.993678 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-01-07 01:08:51.993681 | orchestrator | Wednesday 07 January 2026 01:08:14 +0000 (0:00:00.812) 0:00:02.003 *****
2026-01-07 01:08:51.993685 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 01:08:51.993689 | orchestrator |
2026-01-07 01:08:51.993692 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-01-07 01:08:51.993695 | orchestrator | Wednesday 07 January 2026 01:08:15 +0000 (0:00:01.569) 0:00:03.572 *****
2026-01-07 01:08:51.993698 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-01-07 01:08:51.993701 | orchestrator |
2026-01-07 01:08:51.993704 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-01-07 01:08:51.993707 | orchestrator | Wednesday 07 January 2026 01:08:19 +0000 (0:00:03.321) 0:00:06.893 *****
2026-01-07 01:08:51.993711 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-01-07 01:08:51.993715 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-01-07 01:08:51.993718 | orchestrator |
2026-01-07 01:08:51.993721 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-01-07 01:08:51.993724 | orchestrator | Wednesday 07 January 2026 01:08:25 +0000 (0:00:06.180) 0:00:13.074 *****
2026-01-07 01:08:51.993727 | orchestrator | ok: [testbed-manager] => (item=service)
2026-01-07 01:08:51.993731 | orchestrator |
2026-01-07 01:08:51.993734 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-07 01:08:51.993737 | orchestrator | Wednesday 07 January 2026 01:08:28 +0000 (0:00:03.012) 0:00:16.086 *****
2026-01-07 01:08:51.993740 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:08:51.993749 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-01-07 01:08:51.993752 | orchestrator |
2026-01-07 01:08:51.993755 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-07 01:08:51.993758 | orchestrator | Wednesday 07 January 2026 01:08:32 +0000 (0:00:03.792) 0:00:19.879 *****
2026-01-07 01:08:51.993761 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-01-07 01:08:51.993764 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-01-07 01:08:51.993767 | orchestrator |
2026-01-07 01:08:51.993770 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-01-07 01:08:51.993773 | orchestrator | Wednesday 07 January 2026 01:08:38 +0000 (0:00:06.355) 0:00:26.235 *****
2026-01-07 01:08:51.993777 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-01-07 01:08:51.993780 | orchestrator |
2026-01-07 01:08:51.993783 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:08:51.993786 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.993793 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.993796 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.993799 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.993803 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.993811 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.993814 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.993817 | orchestrator |
2026-01-07 01:08:51.993820 | orchestrator |
2026-01-07 01:08:51.993823 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:08:51.993827 | orchestrator | Wednesday 07 January 2026 01:08:43 +0000 (0:00:04.926) 0:00:31.161 *****
2026-01-07 01:08:51.993830 | orchestrator | ===============================================================================
2026-01-07 01:08:51.993833 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.36s
2026-01-07 01:08:51.993836 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.18s
2026-01-07 01:08:51.993839 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.93s
2026-01-07 01:08:51.993842 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.79s
2026-01-07 01:08:51.993845 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.32s
2026-01-07 01:08:51.993849 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.01s
2026-01-07 01:08:51.993852 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.57s
2026-01-07 01:08:51.993855 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.91s
2026-01-07 01:08:51.993858 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2026-01-07 01:08:51.993861 | orchestrator |
2026-01-07 01:08:51.993864 | orchestrator |
2026-01-07 01:08:51.993867 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-01-07 01:08:51.993870 | orchestrator |
2026-01-07 01:08:51.993873 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-01-07 01:08:51.993876 | orchestrator | Wednesday 07 January 2026 01:05:54 +0000 (0:00:00.092) 0:00:00.092 *****
2026-01-07 01:08:51.993879 | orchestrator | changed: [localhost]
2026-01-07 01:08:51.993882 | orchestrator |
2026-01-07 01:08:51.993885 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-01-07 01:08:51.993888 | orchestrator | Wednesday 07 January 2026 01:05:55 +0000 (0:00:01.581) 0:00:01.675 *****
2026-01-07 01:08:51.993891 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-01-07 01:08:51.993895 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-01-07 01:08:51.993898 | orchestrator |
2026-01-07 01:08:51.993901 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-07 01:08:51.993940 | orchestrator | changed: [localhost]
2026-01-07 01:08:51.993943 | orchestrator |
2026-01-07 01:08:51.993947 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-01-07 01:08:51.993950 | orchestrator | Wednesday 07 January 2026 01:08:34 +0000 (0:02:38.534) 0:02:40.210 *****
2026-01-07 01:08:51.993953 | orchestrator | changed: [localhost]
2026-01-07 01:08:51.993956 | orchestrator |
2026-01-07 01:08:51.993959 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:08:51.993962 | orchestrator |
2026-01-07 01:08:51.993965 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:08:51.993971 | orchestrator | Wednesday 07 January 2026 01:08:47 +0000 (0:00:13.610) 0:02:53.820 *****
2026-01-07 01:08:51.993974 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:08:51.993977 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:08:51.993981 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:08:51.993984 | orchestrator |
2026-01-07 01:08:51.993987 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:08:51.993990 | orchestrator | Wednesday 07 January 2026 01:08:48 +0000 (0:00:00.655) 0:02:54.475 *****
2026-01-07 01:08:51.993993 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-01-07 01:08:51.993998 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-01-07 01:08:51.994002 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-01-07 01:08:51.994005 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-01-07 01:08:51.994008 | orchestrator |
2026-01-07 01:08:51.994034 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-01-07 01:08:51.994038 | orchestrator | skipping: no hosts matched
2026-01-07 01:08:51.994041 | orchestrator |
2026-01-07 01:08:51.994044 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:08:51.994048 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.994051 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.994054 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.994057 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:51.994060 | orchestrator |
2026-01-07 01:08:51.994063 | orchestrator |
2026-01-07 01:08:51.994066 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:08:51.994069 | orchestrator | Wednesday 07 January 2026 01:08:49 +0000 (0:00:01.092) 0:02:55.568 *****
2026-01-07 01:08:51.994073 | orchestrator | ===============================================================================
2026-01-07 01:08:51.994076 | orchestrator | Download ironic-agent initramfs --------------------------------------- 158.53s
2026-01-07 01:08:51.994080 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.61s
2026-01-07 01:08:51.994089 | orchestrator | Ensure the destination directory exists --------------------------------- 1.58s
2026-01-07 01:08:51.994095 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.09s
2026-01-07 01:08:51.994100 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s
2026-01-07 01:08:51.994105 | orchestrator | 2026-01-07 01:08:51 | INFO  | Task 688f3676-4cf4-4cd4-a4e2-615a0607e7b2 is in state STARTED
2026-01-07 01:08:51.994954 | orchestrator | 2026-01-07 01:08:51 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:08:51.995457 | orchestrator | 2026-01-07 01:08:51 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:08:55.022830 | orchestrator | 2026-01-07 01:08:55 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED
2026-01-07 01:08:55.024014 | orchestrator | 2026-01-07 01:08:55 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:08:55.025474 | orchestrator | 2026-01-07 01:08:55 | INFO  | Task 688f3676-4cf4-4cd4-a4e2-615a0607e7b2 is in state STARTED
2026-01-07 01:08:55.026774 | orchestrator | 2026-01-07 01:08:55 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:08:55.026884 | orchestrator | 2026-01-07 01:08:55 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:08:58.070067 | orchestrator | 2026-01-07 01:08:58 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED
2026-01-07 01:08:58.072658 | orchestrator | 2026-01-07 01:08:58 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state STARTED
2026-01-07 01:08:58.075230 | orchestrator | 2026-01-07 01:08:58 | INFO  | Task 688f3676-4cf4-4cd4-a4e2-615a0607e7b2 is in state STARTED
2026-01-07 01:08:58.076605 | orchestrator | 2026-01-07 01:08:58 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:08:58.076658 | orchestrator | 2026-01-07 01:08:58 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:09:01.120677 | orchestrator | 2026-01-07 01:09:01 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED
2026-01-07 01:09:01.122192 | orchestrator |
2026-01-07 01:09:01.122246 | orchestrator | 2026-01-07 01:09:01 | INFO  | Task c0ec4360-7795-4213-9188-b20c734c1c0e is in state SUCCESS
2026-01-07 01:09:01.123504 | orchestrator |
2026-01-07 01:09:01.123546 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:09:01.123553 | orchestrator |
2026-01-07 01:09:01.123559 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:09:01.123565 | orchestrator | Wednesday 07 January 2026 01:07:20 +0000 (0:00:00.273) 0:00:00.273 *****
2026-01-07 01:09:01.123571 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:09:01.123577 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:09:01.123582 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:09:01.123588 | orchestrator |
2026-01-07 01:09:01.123595 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:09:01.123600 | orchestrator | Wednesday 07 January 2026 01:07:20 +0000 (0:00:00.302) 0:00:00.576 *****
2026-01-07 01:09:01.123606 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-07 01:09:01.123612 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-07 01:09:01.123619 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-07 01:09:01.123624 | orchestrator |
2026-01-07 01:09:01.123638 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-07 01:09:01.123645 | orchestrator |
2026-01-07 01:09:01.123651 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-07 01:09:01.123657 | orchestrator | Wednesday 07 January 2026 01:07:21 +0000 (0:00:00.445) 0:00:01.021 *****
2026-01-07 01:09:01.123663 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:09:01.123669 | orchestrator |
2026-01-07 01:09:01.123676 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-07 01:09:01.123680 | orchestrator | Wednesday 07 January 2026 01:07:21 +0000 (0:00:00.540) 0:00:01.562 *****
2026-01-07 01:09:01.123684 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-01-07 01:09:01.123688 | orchestrator |
2026-01-07 01:09:01.123692 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-01-07 01:09:01.123697 | orchestrator | Wednesday 07 January 2026 01:07:25 +0000 (0:00:03.502) 0:00:05.064 *****
2026-01-07 01:09:01.123702 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-01-07 01:09:01.123711 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-01-07 01:09:01.123717 | orchestrator |
2026-01-07 01:09:01.123721 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-01-07 01:09:01.123734 | orchestrator | Wednesday 07 January 2026 01:07:31 +0000 (0:00:06.147) 0:00:11.212 *****
2026-01-07 01:09:01.123740 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-07 01:09:01.123746 | orchestrator |
2026-01-07 01:09:01.123752 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-01-07 01:09:01.123757 | orchestrator | Wednesday 07 January 2026 01:07:34 +0000 (0:00:03.015) 0:00:14.228 *****
2026-01-07 01:09:01.123776 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:09:01.123782 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-01-07 01:09:01.123796 | orchestrator |
2026-01-07 01:09:01.123929 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-01-07 01:09:01.123936 | orchestrator | Wednesday 07 January 2026 01:07:37 +0000 (0:00:03.142) 0:00:17.370 *****
2026-01-07 01:09:01.123942 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-07 01:09:01.123947 | orchestrator |
2026-01-07 01:09:01.123950 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-01-07 01:09:01.123953 | orchestrator | Wednesday 07 January 2026 01:07:40 +0000 (0:00:03.381) 0:00:20.752 *****
2026-01-07 01:09:01.123956 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-01-07 01:09:01.123959 | orchestrator |
2026-01-07 01:09:01.123963 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-01-07 01:09:01.123966 | orchestrator | Wednesday 07 January 2026 01:07:44 +0000 (0:00:03.734) 0:00:24.486 *****
2026-01-07 01:09:01.123969 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:09:01.123972 | orchestrator |
2026-01-07 01:09:01.123976 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-01-07 01:09:01.123979 | orchestrator | Wednesday 07 January 2026 01:07:47 +0000 (0:00:03.072) 0:00:27.559 *****
2026-01-07 01:09:01.123982 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:09:01.123985 | orchestrator |
2026-01-07 01:09:01.123988 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-01-07 01:09:01.123991 | orchestrator | Wednesday 07 January 2026 01:07:51 +0000 (0:00:03.787) 0:00:31.346 *****
2026-01-07 01:09:01.123994 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:09:01.123998 | orchestrator |
2026-01-07 01:09:01.124001 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-01-07 01:09:01.124004 | orchestrator | Wednesday 07 January 2026 01:07:54 +0000 (0:00:03.232) 0:00:34.579 *****
2026-01-07 01:09:01.124018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124067 | orchestrator |
2026-01-07 01:09:01.124072 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-01-07 01:09:01.124077 | orchestrator | Wednesday 07 January 2026 01:07:55 +0000 (0:00:01.252) 0:00:35.832 *****
2026-01-07 01:09:01.124082 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:09:01.124087 | orchestrator |
2026-01-07 01:09:01.124092 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-01-07 01:09:01.124097 | orchestrator | Wednesday 07 January 2026 01:07:56 +0000 (0:00:00.142) 0:00:35.974 *****
2026-01-07 01:09:01.124101 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:09:01.124106 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:09:01.124111 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:09:01.124115 | orchestrator |
2026-01-07 01:09:01.124120 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-01-07 01:09:01.124125 | orchestrator | Wednesday 07 January 2026 01:07:56 +0000 (0:00:00.526) 0:00:36.501 *****
2026-01-07 01:09:01.124137 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:09:01.124142 | orchestrator |
2026-01-07 01:09:01.124146 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-01-07 01:09:01.124150 | orchestrator | Wednesday 07 January 2026 01:07:57 +0000 (0:00:00.910) 0:00:37.411 *****
2026-01-07 01:09:01.124156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124252 | orchestrator |
2026-01-07 01:09:01.124256 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-01-07 01:09:01.124261 | orchestrator | Wednesday 07 January 2026 01:07:59 +0000 (0:00:02.165) 0:00:39.576 *****
2026-01-07 01:09:01.124266 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:09:01.124271 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:09:01.124276 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:09:01.124281 | orchestrator |
2026-01-07 01:09:01.124286 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-07 01:09:01.124291 | orchestrator | Wednesday 07 January 2026 01:07:59 +0000 (0:00:00.287) 0:00:39.863 *****
2026-01-07 01:09:01.124296 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:09:01.124302 | orchestrator |
2026-01-07 01:09:01.124306 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-01-07 01:09:01.124311 | orchestrator | Wednesday 07 January 2026 01:08:00 +0000 (0:00:00.747) 0:00:40.610 *****
2026-01-07 01:09:01.124316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124363 | orchestrator |
2026-01-07 01:09:01.124368 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-01-07 01:09:01.124373 | orchestrator | Wednesday 07 January 2026 01:08:02 +0000 (0:00:02.251) 0:00:42.862 *****
2026-01-07 01:09:01.124382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124398 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:09:01.124404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group':
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:09:01.124409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:09:01.124414 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:01.124419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:09:01.124428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:09:01.124437 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:01.124442 | orchestrator | 2026-01-07 01:09:01.124447 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-01-07 01:09:01.124452 | orchestrator | Wednesday 07 January 2026 01:08:03 +0000 (0:00:00.619) 0:00:43.482 ***** 2026-01-07 01:09:01.124460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:09:01.124466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:09:01.124471 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:01.124476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:09:01.124481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:09:01.124489 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:01.124500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:09:01.124508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:09:01.124513 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:01.124519 | orchestrator | 2026-01-07 01:09:01.124524 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-01-07 01:09:01.124528 | orchestrator | Wednesday 07 January 2026 01:08:04 +0000 (0:00:00.981) 0:00:44.464 ***** 2026-01-07 01:09:01.124533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:09:01.124539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:09:01.124552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:09:01.124560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:01.124566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:01.124571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:01.124576 | orchestrator | 2026-01-07 01:09:01.124582 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-07 01:09:01.124587 | orchestrator | Wednesday 07 January 2026 01:08:06 +0000 (0:00:02.341) 0:00:46.805 ***** 2026-01-07 01:09:01.124593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:09:01.124605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:09:01.124626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:09:01.124632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:01.124638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:01.124643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:01.124651 | orchestrator | 2026-01-07 01:09:01.124657 | orchestrator | TASK 
[magnum : Copying over existing policy file] ****************************** 2026-01-07 01:09:01.124663 | orchestrator | Wednesday 07 January 2026 01:08:12 +0000 (0:00:05.561) 0:00:52.367 ***** 2026-01-07 01:09:01.124673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:09:01.124681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:09:01.124687 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:01.124693 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:09:01.124698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:09:01.124707 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:01.124712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:09:01.124721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:09:01.124727 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:01.124733 | orchestrator | 2026-01-07 01:09:01.124738 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-07 01:09:01.124743 | orchestrator | Wednesday 07 January 2026 01:08:13 +0000 (0:00:00.638) 0:00:53.005 ***** 2026-01-07 01:09:01.124823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:09:01.124833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:09:01.124845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-07 01:09:01.124852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:09:01.124876 | orchestrator |
2026-01-07 01:09:01.124881 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-07 01:09:01.124886 | orchestrator | Wednesday 07 January 2026 01:08:15 +0000 (0:00:02.178) 0:00:55.183 *****
2026-01-07 01:09:01.124892 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:09:01.124898 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:09:01.124903 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:09:01.124908 | orchestrator |
2026-01-07 01:09:01.124913 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-01-07 01:09:01.124918 | orchestrator | Wednesday 07 January 2026 01:08:15 +0000 (0:00:00.310) 0:00:55.494 *****
2026-01-07 01:09:01.124925 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:09:01.124931 | orchestrator |
2026-01-07 01:09:01.124935 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-01-07 01:09:01.124944 | orchestrator | Wednesday 07 January 2026 01:08:17 +0000 (0:00:01.791) 0:00:57.286 *****
2026-01-07 01:09:01.124950 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:09:01.124955 | orchestrator |
2026-01-07 01:09:01.124960 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-01-07 01:09:01.124966 | orchestrator | Wednesday 07 January 2026 01:08:19 +0000 (0:00:02.057) 0:00:59.343 *****
2026-01-07 01:09:01.124971 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:09:01.124977 | orchestrator |
2026-01-07 01:09:01.124982 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-01-07 01:09:01.124988 | orchestrator | Wednesday 07 January 2026 01:08:34 +0000 (0:00:15.349) 0:01:14.692 *****
2026-01-07 01:09:01.124993 | orchestrator |
2026-01-07 01:09:01.124998 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-01-07 01:09:01.125003 | orchestrator | Wednesday 07 January 2026 01:08:34 +0000 (0:00:00.065) 0:01:14.757 *****
2026-01-07 01:09:01.125009 | orchestrator |
2026-01-07 01:09:01.125014 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-01-07 01:09:01.125020 | orchestrator | Wednesday 07 January 2026 01:08:34 +0000 (0:00:00.068) 0:01:14.826 *****
2026-01-07 01:09:01.125025 | orchestrator |
2026-01-07 01:09:01.125030 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-01-07 01:09:01.125036 | orchestrator | Wednesday 07 January 2026 01:08:35 +0000 (0:00:00.067) 0:01:14.894 *****
2026-01-07 01:09:01.125041 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:09:01.125046 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:09:01.125052 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:09:01.125057 | orchestrator |
2026-01-07 01:09:01.125062 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-01-07 01:09:01.125067 | orchestrator | Wednesday 07 January 2026 01:08:46 +0000 (0:00:11.266) 0:01:26.161 *****
2026-01-07 01:09:01.125073 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:09:01.125078 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:09:01.125083 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:09:01.125088 | orchestrator |
2026-01-07 01:09:01.125093 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:09:01.125099 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 01:09:01.125106 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-07 01:09:01.125112 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-07 01:09:01.125117 | orchestrator |
2026-01-07 01:09:01.125122 | orchestrator |
2026-01-07 01:09:01.125128 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:09:01.125133 | orchestrator | Wednesday 07 January 2026 01:09:00 +0000 (0:00:14.028) 0:01:40.189 *****
2026-01-07 01:09:01.125139 | orchestrator | ===============================================================================
2026-01-07 01:09:01.125144 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.35s
2026-01-07 01:09:01.125156 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.03s
2026-01-07 01:09:01.125163 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 11.27s
2026-01-07 01:09:01.125168 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.15s
2026-01-07 01:09:01.125173 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.56s
2026-01-07 01:09:01.125178 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.79s
2026-01-07 01:09:01.125198 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.73s
2026-01-07 01:09:01.125203 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.50s
2026-01-07 01:09:01.125214 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.38s
2026-01-07 01:09:01.125220 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.23s
2026-01-07 01:09:01.125226 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.14s
2026-01-07 01:09:01.125234 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.07s
2026-01-07 01:09:01.125241 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.02s
2026-01-07 01:09:01.125247 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.34s
2026-01-07 01:09:01.125253 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.25s
2026-01-07 01:09:01.125259 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.18s
2026-01-07 01:09:01.125262 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.17s
2026-01-07 01:09:01.125266 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.06s
2026-01-07 01:09:01.125271 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.79s
2026-01-07 01:09:01.125275 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.25s
2026-01-07 01:09:01.125279 | orchestrator | 2026-01-07 01:09:01 | INFO  | Task 688f3676-4cf4-4cd4-a4e2-615a0607e7b2 is in state STARTED
2026-01-07
01:09:01.125283 | orchestrator | 2026-01-07 01:09:01 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:09:01.125287 | orchestrator | 2026-01-07 01:09:01 | INFO  | Wait 1 second(s) until the next check
3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:11:18.323421 | orchestrator | 2026-01-07 01:11:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:21.377066 | orchestrator | 2026-01-07 01:11:21 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state STARTED 2026-01-07 01:11:21.379310 | orchestrator | 2026-01-07 01:11:21 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED 2026-01-07 01:11:21.382096 | orchestrator | 2026-01-07 01:11:21 | INFO  | Task bfc3f706-5092-48f8-b001-a05a6c27ee0a is in state STARTED 2026-01-07 01:11:21.385270 | orchestrator | 2026-01-07 01:11:21 | INFO  | Task 688f3676-4cf4-4cd4-a4e2-615a0607e7b2 is in state SUCCESS 2026-01-07 01:11:21.387088 | orchestrator | 2026-01-07 01:11:21.387143 | orchestrator | 2026-01-07 01:11:21.387150 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:11:21.387156 | orchestrator | 2026-01-07 01:11:21.387161 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:11:21.387166 | orchestrator | Wednesday 07 January 2026 01:08:21 +0000 (0:00:00.265) 0:00:00.265 ***** 2026-01-07 01:11:21.387172 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:11:21.387177 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:11:21.387182 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:11:21.387187 | orchestrator | 2026-01-07 01:11:21.387191 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:11:21.387196 | orchestrator | Wednesday 07 January 2026 01:08:21 +0000 (0:00:00.319) 0:00:00.585 ***** 2026-01-07 01:11:21.387201 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-01-07 01:11:21.387207 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-01-07 01:11:21.387212 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-01-07 
01:11:21.387244 | orchestrator | 2026-01-07 01:11:21.387250 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-01-07 01:11:21.387266 | orchestrator | 2026-01-07 01:11:21.387271 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-07 01:11:21.387276 | orchestrator | Wednesday 07 January 2026 01:08:22 +0000 (0:00:00.565) 0:00:01.151 ***** 2026-01-07 01:11:21.387281 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:11:21.387286 | orchestrator | 2026-01-07 01:11:21.387290 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-01-07 01:11:21.387295 | orchestrator | Wednesday 07 January 2026 01:08:23 +0000 (0:00:00.574) 0:00:01.725 ***** 2026-01-07 01:11:21.387301 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-01-07 01:11:21.387306 | orchestrator | 2026-01-07 01:11:21.387320 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-01-07 01:11:21.387341 | orchestrator | Wednesday 07 January 2026 01:08:26 +0000 (0:00:03.249) 0:00:04.974 ***** 2026-01-07 01:11:21.387347 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-01-07 01:11:21.387353 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-01-07 01:11:21.387358 | orchestrator | 2026-01-07 01:11:21.387363 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-01-07 01:11:21.387368 | orchestrator | Wednesday 07 January 2026 01:08:31 +0000 (0:00:05.351) 0:00:10.326 ***** 2026-01-07 01:11:21.387373 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:11:21.387378 | orchestrator | 2026-01-07 01:11:21.387382 | orchestrator | TASK 
[service-ks-register : glance | Creating users] *************************** 2026-01-07 01:11:21.387387 | orchestrator | Wednesday 07 January 2026 01:08:35 +0000 (0:00:03.680) 0:00:14.006 ***** 2026-01-07 01:11:21.387392 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:11:21.387397 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-01-07 01:11:21.387402 | orchestrator | 2026-01-07 01:11:21.387407 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-01-07 01:11:21.387412 | orchestrator | Wednesday 07 January 2026 01:08:39 +0000 (0:00:04.377) 0:00:18.384 ***** 2026-01-07 01:11:21.387417 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:11:21.387422 | orchestrator | 2026-01-07 01:11:21.387467 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-01-07 01:11:21.387475 | orchestrator | Wednesday 07 January 2026 01:08:42 +0000 (0:00:03.179) 0:00:21.564 ***** 2026-01-07 01:11:21.387481 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-01-07 01:11:21.387487 | orchestrator | 2026-01-07 01:11:21.387492 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-01-07 01:11:21.387497 | orchestrator | Wednesday 07 January 2026 01:08:45 +0000 (0:00:03.067) 0:00:24.631 ***** 2026-01-07 01:11:21.387554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.387566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.387579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.387585 | orchestrator | 2026-01-07 01:11:21.387590 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-07 01:11:21.387595 | orchestrator | Wednesday 07 January 2026 01:08:51 +0000 (0:00:05.693) 0:00:30.325 ***** 2026-01-07 01:11:21.387600 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:11:21.387606 | orchestrator | 2026-01-07 01:11:21.387611 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-01-07 01:11:21.387620 | orchestrator | Wednesday 07 January 2026 01:08:52 +0000 (0:00:00.690) 0:00:31.015 ***** 2026-01-07 01:11:21.387626 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:11:21.387631 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:11:21.387637 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:11:21.387642 | orchestrator | 2026-01-07 01:11:21.387648 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-01-07 01:11:21.387657 | orchestrator | Wednesday 07 January 2026 01:08:55 +0000 (0:00:03.393) 0:00:34.409 ***** 2026-01-07 01:11:21.387662 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 
01:11:21.387668 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:11:21.387673 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:11:21.387678 | orchestrator | 2026-01-07 01:11:21.387683 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-01-07 01:11:21.387688 | orchestrator | Wednesday 07 January 2026 01:08:57 +0000 (0:00:01.640) 0:00:36.050 ***** 2026-01-07 01:11:21.387693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:11:21.387698 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:11:21.387703 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:11:21.387708 | orchestrator | 2026-01-07 01:11:21.387715 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-01-07 01:11:21.387721 | orchestrator | Wednesday 07 January 2026 01:08:58 +0000 (0:00:01.222) 0:00:37.273 ***** 2026-01-07 01:11:21.387726 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:11:21.387731 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:11:21.387736 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:11:21.387741 | orchestrator | 2026-01-07 01:11:21.387747 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-01-07 01:11:21.387752 | orchestrator | Wednesday 07 January 2026 01:08:59 +0000 (0:00:00.576) 0:00:37.849 ***** 2026-01-07 01:11:21.387757 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.387762 | orchestrator | 2026-01-07 01:11:21.387767 | orchestrator | TASK [glance : Set glance policy file] 
***************************************** 2026-01-07 01:11:21.387771 | orchestrator | Wednesday 07 January 2026 01:08:59 +0000 (0:00:00.320) 0:00:38.170 ***** 2026-01-07 01:11:21.387777 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.387782 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.387787 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.387792 | orchestrator | 2026-01-07 01:11:21.387796 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-07 01:11:21.387801 | orchestrator | Wednesday 07 January 2026 01:08:59 +0000 (0:00:00.303) 0:00:38.473 ***** 2026-01-07 01:11:21.387821 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:11:21.387826 | orchestrator | 2026-01-07 01:11:21.387830 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-07 01:11:21.387835 | orchestrator | Wednesday 07 January 2026 01:09:00 +0000 (0:00:00.554) 0:00:39.028 ***** 2026-01-07 01:11:21.387841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.387860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.387866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.387874 | orchestrator | 2026-01-07 01:11:21.387879 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-07 01:11:21.387883 | orchestrator | Wednesday 07 January 2026 01:09:04 +0000 (0:00:04.042) 0:00:43.070 ***** 2026-01-07 01:11:21.387892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:11:21.387898 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.387905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:11:21.387911 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.387920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:11:21.387929 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.387933 | orchestrator | 2026-01-07 
01:11:21.387938 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-07 01:11:21.387944 | orchestrator | Wednesday 07 January 2026 01:09:07 +0000 (0:00:03.055) 0:00:46.126 ***** 2026-01-07 01:11:21.387951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:11:21.387957 | orchestrator | 
skipping: [testbed-node-0] 2026-01-07 01:11:21.387962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:11:21.387972 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.387988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:11:21.387995 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.388000 | orchestrator | 2026-01-07 01:11:21.388004 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-07 01:11:21.388008 | orchestrator | Wednesday 07 January 2026 01:09:10 +0000 (0:00:03.411) 0:00:49.538 ***** 2026-01-07 01:11:21.388013 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.388018 | 
orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.388023 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.388028 | orchestrator | 2026-01-07 01:11:21.388033 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-07 01:11:21.388038 | orchestrator | Wednesday 07 January 2026 01:09:14 +0000 (0:00:03.862) 0:00:53.401 ***** 2026-01-07 01:11:21.388043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.388060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.388066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.388074 | orchestrator | 2026-01-07 01:11:21.388079 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-07 01:11:21.388084 | orchestrator | Wednesday 07 January 2026 01:09:18 +0000 (0:00:04.070) 0:00:57.471 ***** 2026-01-07 01:11:21.388089 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:11:21.388093 | orchestrator | changed: [testbed-node-1] 2026-01-07 
01:11:21.388098 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:11:21.388103 | orchestrator | 2026-01-07 01:11:21.388107 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-07 01:11:21.388112 | orchestrator | Wednesday 07 January 2026 01:09:25 +0000 (0:00:06.443) 0:01:03.915 ***** 2026-01-07 01:11:21.388117 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.388122 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.388126 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.388131 | orchestrator | 2026-01-07 01:11:21.388136 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-07 01:11:21.388141 | orchestrator | Wednesday 07 January 2026 01:09:29 +0000 (0:00:04.428) 0:01:08.343 ***** 2026-01-07 01:11:21.388146 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.388150 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.388156 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.388160 | orchestrator | 2026-01-07 01:11:21.388165 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-07 01:11:21.388170 | orchestrator | Wednesday 07 January 2026 01:09:33 +0000 (0:00:04.070) 0:01:12.414 ***** 2026-01-07 01:11:21.388175 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.388289 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.388301 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.388305 | orchestrator | 2026-01-07 01:11:21.388310 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-07 01:11:21.388315 | orchestrator | Wednesday 07 January 2026 01:09:37 +0000 (0:00:03.578) 0:01:15.992 ***** 2026-01-07 01:11:21.388319 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.388324 | orchestrator | skipping: [testbed-node-1] 2026-01-07 
01:11:21.388329 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.388334 | orchestrator | 2026-01-07 01:11:21.388338 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-07 01:11:21.388344 | orchestrator | Wednesday 07 January 2026 01:09:41 +0000 (0:00:04.177) 0:01:20.170 ***** 2026-01-07 01:11:21.388350 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.388355 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.388359 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.388365 | orchestrator | 2026-01-07 01:11:21.388370 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-07 01:11:21.388375 | orchestrator | Wednesday 07 January 2026 01:09:41 +0000 (0:00:00.340) 0:01:20.511 ***** 2026-01-07 01:11:21.388381 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-07 01:11:21.388387 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.388392 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-07 01:11:21.388397 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.388402 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-07 01:11:21.388413 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.388418 | orchestrator | 2026-01-07 01:11:21.388428 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-07 01:11:21.388433 | orchestrator | Wednesday 07 January 2026 01:09:46 +0000 (0:00:04.291) 0:01:24.802 ***** 2026-01-07 01:11:21.388438 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:11:21.388443 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:11:21.388448 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:11:21.388454 | 
orchestrator | 2026-01-07 01:11:21.388458 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-07 01:11:21.388464 | orchestrator | Wednesday 07 January 2026 01:09:50 +0000 (0:00:03.971) 0:01:28.774 ***** 2026-01-07 01:11:21.388470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.388483 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.388496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:11:21.388502 | orchestrator | 2026-01-07 01:11:21.388507 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-07 01:11:21.388511 | orchestrator | Wednesday 07 January 2026 01:09:55 +0000 (0:00:05.106) 0:01:33.880 ***** 2026-01-07 01:11:21.388516 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:21.388521 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:21.388526 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:21.388531 | orchestrator | 2026-01-07 01:11:21.388536 | orchestrator | TASK 
[glance : Creating Glance database] *************************************** 2026-01-07 01:11:21.388540 | orchestrator | Wednesday 07 January 2026 01:09:55 +0000 (0:00:00.341) 0:01:34.222 ***** 2026-01-07 01:11:21.388545 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:11:21.388550 | orchestrator | 2026-01-07 01:11:21.388555 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-07 01:11:21.388559 | orchestrator | Wednesday 07 January 2026 01:09:57 +0000 (0:00:01.959) 0:01:36.181 ***** 2026-01-07 01:11:21.388564 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:11:21.388569 | orchestrator | 2026-01-07 01:11:21.388574 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-07 01:11:21.388579 | orchestrator | Wednesday 07 January 2026 01:09:59 +0000 (0:00:02.236) 0:01:38.418 ***** 2026-01-07 01:11:21.388583 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:11:21.388588 | orchestrator | 2026-01-07 01:11:21.388594 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-07 01:11:21.388599 | orchestrator | Wednesday 07 January 2026 01:10:01 +0000 (0:00:01.931) 0:01:40.350 ***** 2026-01-07 01:11:21.388604 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:11:21.388608 | orchestrator | 2026-01-07 01:11:21.388613 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-07 01:11:21.388618 | orchestrator | Wednesday 07 January 2026 01:10:40 +0000 (0:00:38.593) 0:02:18.943 ***** 2026-01-07 01:11:21.388623 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:11:21.388628 | orchestrator | 2026-01-07 01:11:21.388633 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-07 01:11:21.388639 | orchestrator | Wednesday 07 January 2026 01:10:43 +0000 (0:00:02.987) 0:02:21.930 ***** 2026-01-07 01:11:21.388649 
| orchestrator | 2026-01-07 01:11:21.388660 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-07 01:11:21.388665 | orchestrator | Wednesday 07 January 2026 01:10:43 +0000 (0:00:00.279) 0:02:22.210 ***** 2026-01-07 01:11:21.388670 | orchestrator | 2026-01-07 01:11:21.388675 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-07 01:11:21.388680 | orchestrator | Wednesday 07 January 2026 01:10:43 +0000 (0:00:00.063) 0:02:22.273 ***** 2026-01-07 01:11:21.388685 | orchestrator | 2026-01-07 01:11:21.388690 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-07 01:11:21.388695 | orchestrator | Wednesday 07 January 2026 01:10:43 +0000 (0:00:00.067) 0:02:22.341 ***** 2026-01-07 01:11:21.388700 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:11:21.388705 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:11:21.388711 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:11:21.388716 | orchestrator | 2026-01-07 01:11:21.388721 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:11:21.388726 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-07 01:11:21.388732 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:11:21.388737 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:11:21.388742 | orchestrator | 2026-01-07 01:11:21.388748 | orchestrator | 2026-01-07 01:11:21.388753 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:11:21.388761 | orchestrator | Wednesday 07 January 2026 01:11:17 +0000 (0:00:34.292) 0:02:56.633 ***** 2026-01-07 01:11:21.388766 | orchestrator | 
=============================================================================== 2026-01-07 01:11:21.388771 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 38.59s 2026-01-07 01:11:21.388776 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.29s 2026-01-07 01:11:21.388781 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.44s 2026-01-07 01:11:21.388786 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.69s 2026-01-07 01:11:21.388791 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.35s 2026-01-07 01:11:21.388796 | orchestrator | glance : Check glance containers ---------------------------------------- 5.11s 2026-01-07 01:11:21.388801 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.43s 2026-01-07 01:11:21.388821 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.38s 2026-01-07 01:11:21.388826 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.29s 2026-01-07 01:11:21.388831 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.18s 2026-01-07 01:11:21.388836 | orchestrator | glance : Copying over config.json files for services -------------------- 4.07s 2026-01-07 01:11:21.388840 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.07s 2026-01-07 01:11:21.388845 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.04s 2026-01-07 01:11:21.388849 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.97s 2026-01-07 01:11:21.388854 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.86s 2026-01-07 01:11:21.388859 | orchestrator | 
service-ks-register : glance | Creating projects ------------------------ 3.68s 2026-01-07 01:11:21.388863 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.58s 2026-01-07 01:11:21.388868 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.41s 2026-01-07 01:11:21.388878 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.39s 2026-01-07 01:11:21.388883 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.25s 2026-01-07 01:11:21.388888 | orchestrator | 2026-01-07 01:11:21 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:11:21.388891 | orchestrator | 2026-01-07 01:11:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:24.435790 | orchestrator | 2026-01-07 01:11:24 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state STARTED 2026-01-07 01:11:24.437707 | orchestrator | 2026-01-07 01:11:24 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED 2026-01-07 01:11:24.440573 | orchestrator | 2026-01-07 01:11:24 | INFO  | Task bfc3f706-5092-48f8-b001-a05a6c27ee0a is in state STARTED 2026-01-07 01:11:24.442595 | orchestrator | 2026-01-07 01:11:24 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:11:24.442639 | orchestrator | 2026-01-07 01:11:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:27.491350 | orchestrator | 2026-01-07 01:11:27 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state STARTED 2026-01-07 01:11:27.492972 | orchestrator | 2026-01-07 01:11:27 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED 2026-01-07 01:11:27.495010 | orchestrator | 2026-01-07 01:11:27 | INFO  | Task bfc3f706-5092-48f8-b001-a05a6c27ee0a is in state STARTED 2026-01-07 01:11:27.496932 | orchestrator | 2026-01-07 01:11:27 | INFO  | Task 
3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:11:27.497020 | orchestrator | 2026-01-07 01:11:27 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:30.546325 | orchestrator | 2026-01-07 01:11:30 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state STARTED 2026-01-07 01:11:30.546931 | orchestrator | 2026-01-07 01:11:30 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED 2026-01-07 01:11:30.547473 | orchestrator | 2026-01-07 01:11:30 | INFO  | Task bfc3f706-5092-48f8-b001-a05a6c27ee0a is in state STARTED 2026-01-07 01:11:30.548397 | orchestrator | 2026-01-07 01:11:30 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:11:30.548500 | orchestrator | 2026-01-07 01:11:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:33.599569 | orchestrator | 2026-01-07 01:11:33 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state STARTED 2026-01-07 01:11:33.601907 | orchestrator | 2026-01-07 01:11:33 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED 2026-01-07 01:11:33.604257 | orchestrator | 2026-01-07 01:11:33 | INFO  | Task bfc3f706-5092-48f8-b001-a05a6c27ee0a is in state STARTED 2026-01-07 01:11:33.605700 | orchestrator | 2026-01-07 01:11:33 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:11:33.605729 | orchestrator | 2026-01-07 01:11:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:36.653669 | orchestrator | 2026-01-07 01:11:36 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state STARTED 2026-01-07 01:11:36.656096 | orchestrator | 2026-01-07 01:11:36 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED 2026-01-07 01:11:36.659889 | orchestrator | 2026-01-07 01:11:36 | INFO  | Task bfc3f706-5092-48f8-b001-a05a6c27ee0a is in state SUCCESS 2026-01-07 01:11:36.662189 | orchestrator | 2026-01-07 01:11:36.662264 | orchestrator | 2026-01-07 
01:11:36.662274 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:11:36.662283 | orchestrator | 2026-01-07 01:11:36.662291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:11:36.662327 | orchestrator | Wednesday 07 January 2026 01:09:05 +0000 (0:00:00.305) 0:00:00.305 ***** 2026-01-07 01:11:36.662336 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:11:36.662345 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:11:36.662352 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:11:36.662360 | orchestrator | 2026-01-07 01:11:36.662368 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:11:36.662376 | orchestrator | Wednesday 07 January 2026 01:09:05 +0000 (0:00:00.414) 0:00:00.720 ***** 2026-01-07 01:11:36.662384 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-07 01:11:36.662393 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-07 01:11:36.662401 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-07 01:11:36.662409 | orchestrator | 2026-01-07 01:11:36.662418 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-07 01:11:36.662426 | orchestrator | 2026-01-07 01:11:36.662433 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-07 01:11:36.662439 | orchestrator | Wednesday 07 January 2026 01:09:06 +0000 (0:00:00.596) 0:00:01.317 ***** 2026-01-07 01:11:36.662446 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:11:36.662456 | orchestrator | 2026-01-07 01:11:36.662464 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-07 01:11:36.662473 | orchestrator | Wednesday 07 January 
2026 01:09:07 +0000 (0:00:00.523) 0:00:01.840 ***** 2026-01-07 01:11:36.662484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:11:36.662496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:11:36.662506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662514 | orchestrator |
2026-01-07 01:11:36.662524 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-01-07 01:11:36.662532 | orchestrator | Wednesday 07 January 2026 01:09:07 +0000 (0:00:00.720) 0:00:02.560 *****
2026-01-07 01:11:36.662542 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-01-07 01:11:36.662560 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-01-07 01:11:36.662567 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:11:36.662572 | orchestrator |
2026-01-07 01:11:36.662588 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-07 01:11:36.662593 | orchestrator | Wednesday 07 January 2026 01:09:08 +0000 (0:00:00.963) 0:00:03.524 *****
2026-01-07 01:11:36.662598 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:11:36.662604 | orchestrator |
2026-01-07 01:11:36.662609 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-01-07 01:11:36.662615 | orchestrator | Wednesday 07 January 2026 01:09:09 +0000 (0:00:00.830) 0:00:04.354 *****
2026-01-07 01:11:36.662631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662648 | orchestrator |
2026-01-07 01:11:36.662653 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-01-07 01:11:36.662659 | orchestrator | Wednesday 07 January 2026 01:09:11 +0000 (0:00:01.679) 0:00:06.033 *****
2026-01-07 01:11:36.662664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662670 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:36.662675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662684 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:36.662699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662704 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:36.662710 | orchestrator |
2026-01-07 01:11:36.662715 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-01-07 01:11:36.662720 | orchestrator | Wednesday 07 January 2026 01:09:11 +0000 (0:00:00.555) 0:00:06.589 *****
2026-01-07 01:11:36.662725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662731 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:36.662736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662741 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:36.662747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662754 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:36.662762 | orchestrator |
2026-01-07 01:11:36.662770 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-01-07 01:11:36.662778 | orchestrator | Wednesday 07 January 2026 01:09:12 +0000 (0:00:01.028) 0:00:07.617 *****
2026-01-07 01:11:36.662795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662834 | orchestrator |
2026-01-07 01:11:36.662842 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-01-07 01:11:36.662850 | orchestrator | Wednesday 07 January 2026 01:09:14 +0000 (0:00:01.537) 0:00:09.155 *****
2026-01-07 01:11:36.662876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.662912 | orchestrator |
2026-01-07 01:11:36.662920 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-01-07 01:11:36.662928 | orchestrator | Wednesday 07 January 2026 01:09:15 +0000 (0:00:01.357) 0:00:10.512 *****
2026-01-07 01:11:36.662936 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:36.662945 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:36.662954 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:36.662962 | orchestrator |
2026-01-07 01:11:36.662972 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-01-07 01:11:36.662981 | orchestrator | Wednesday 07 January 2026 01:09:16 +0000 (0:00:00.568) 0:00:11.080 *****
2026-01-07 01:11:36.662990 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-07 01:11:36.662999 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-07 01:11:36.663009 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-07 01:11:36.663017 | orchestrator |
2026-01-07 01:11:36.663026 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-01-07 01:11:36.663032 | orchestrator | Wednesday 07 January 2026 01:09:17 +0000 (0:00:01.344) 0:00:12.424 *****
2026-01-07 01:11:36.663043 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-07 01:11:36.663049 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-07 01:11:36.663055 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-07 01:11:36.663061 | orchestrator |
2026-01-07 01:11:36.663067 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-01-07 01:11:36.663073 | orchestrator | Wednesday 07 January 2026 01:09:18 +0000 (0:00:01.179) 0:00:13.604 *****
2026-01-07 01:11:36.663092 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:11:36.663098 | orchestrator |
2026-01-07 01:11:36.663104 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-01-07 01:11:36.663110 | orchestrator | Wednesday 07 January 2026 01:09:19 +0000 (0:00:00.797) 0:00:14.401 *****
2026-01-07 01:11:36.663116 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-01-07 01:11:36.663122 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-01-07 01:11:36.663128 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:11:36.663133 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:11:36.663138 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:11:36.663143 | orchestrator |
2026-01-07 01:11:36.663152 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-01-07 01:11:36.663161 | orchestrator | Wednesday 07 January 2026 01:09:20 +0000 (0:00:00.923) 0:00:15.324 *****
2026-01-07 01:11:36.663169 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:36.663176 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:36.663184 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:36.663191 | orchestrator |
2026-01-07 01:11:36.663199 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-01-07 01:11:36.663211 | orchestrator | Wednesday 07 January 2026 01:09:21 +0000 (0:00:00.739) 0:00:16.064 *****
2026-01-07 01:11:36.663246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097291, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.585876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097291, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.585876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097291, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.585876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097403, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6043134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097403, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6043134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097403, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6043134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097349, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5902429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097349, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5902429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097349, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5902429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097405, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.606993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097405, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.606993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097405, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.606993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097375, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5956666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097375, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5956666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097375, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5956666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097396, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6015186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097396, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6015186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097396, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6015186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097289, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5625181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097289, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5625181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097289, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5625181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097339, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5870166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097339, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5870166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097339, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5870166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097353, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5905976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097353, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5905976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097353, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5905976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097385, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5977914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097385, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5977914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097385, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5977914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1097402, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6039999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1097402, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6039999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1097402, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6039999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:11:36.663605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False,
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1097344, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5888605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1097344, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5888605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1097344, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5888605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1097393, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6004326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1097393, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6004326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1097393, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6004326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1097379, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5965185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1097379, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5965185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1097369, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.59491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1097379, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5965185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1097369, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.59491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1097361, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.591955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663691 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1097369, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.59491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1097361, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.591955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1097388, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5997522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663710 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1097361, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.591955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1097388, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5997522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1097356, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.591443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 
01:11:36.663735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1097388, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.5997522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1097356, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.591443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1097400, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6025188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-01-07 01:11:36.663752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1097356, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.591443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1097400, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6025188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1097517, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6541383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1097400, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6025188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1097517, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6541383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1097425, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6215189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1097517, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6541383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1097425, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6215189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1097420, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6124256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1097425, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6215189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1097420, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6124256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1097444, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 
1767745137.6256075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1097420, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6124256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1097444, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6256075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 
1097412, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.607817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1097444, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6256075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1097412, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.607817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1097477, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.637845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1097412, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.607817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1097477, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.637845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1097445, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6336508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1097477, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.637845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1097445, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6336508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663960 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1097484, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.639417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1097445, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6336508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1097484, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.639417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1097508, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6486998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1097508, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6486998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.663998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1097484, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.639417, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1097475, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6366608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1097475, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6366608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1097440, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 
1767745137.6243532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1097508, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6486998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1097440, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6243532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1097475, 'dev': 117, 'nlink': 1, 'atime': 
1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6366608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1097424, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6155188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1097424, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6155188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 
'inode': 1097436, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6238756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1097440, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6243532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097436, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6238756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097422, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.614519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1097424, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6155188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097422, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.614519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097441, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6256075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097436, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6238756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097441, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6256075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097496, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6476016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097422, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.614519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097496, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6476016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664205 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097493, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6410692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097441, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6256075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097493, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6410692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097414, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6084588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097496, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6476016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097414, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6084588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1097417, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6108387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097493, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6410692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1097417, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 
1767744158.0, 'ctime': 1767745137.6108387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097465, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.635045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097414, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6084588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097465, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.635045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097491, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6402788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1097417, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6108387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097491, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6402788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097465, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.635045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097491, 'dev': 117, 'nlink': 1, 'atime': 1767744158.0, 'mtime': 1767744158.0, 'ctime': 1767745137.6402788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:11:36.664325 | orchestrator | 2026-01-07 01:11:36.664331 | orchestrator | TASK [grafana : Check grafana containers] 
************************************** 2026-01-07 01:11:36.664337 | orchestrator | Wednesday 07 January 2026 01:09:59 +0000 (0:00:37.843) 0:00:53.908 ***** 2026-01-07 01:11:36.664344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:11:36.664359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:11:36.664368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 01:11:36.664376 | orchestrator |
2026-01-07 01:11:36.664384 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-01-07 01:11:36.664392 | orchestrator | Wednesday 07 January 2026 01:10:00 +0000 (0:00:01.280) 0:00:55.188 *****
2026-01-07 01:11:36.664404 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:36.664413 | orchestrator |
2026-01-07 01:11:36.664422 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-01-07 01:11:36.664430 | orchestrator | Wednesday 07 January 2026 01:10:02 +0000 (0:00:02.330) 0:00:57.519 *****
2026-01-07 01:11:36.664438 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:36.664447 | orchestrator |
2026-01-07 01:11:36.664456 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-07 01:11:36.664463 | orchestrator | Wednesday 07 January 2026 01:10:05 +0000 (0:00:02.619) 0:01:00.138 *****
2026-01-07 01:11:36.664468 | orchestrator |
2026-01-07 01:11:36.664474 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-07 01:11:36.664482 | orchestrator | Wednesday 07 January 2026 01:10:05 +0000 (0:00:00.071) 0:01:00.210 *****
2026-01-07 01:11:36.664487 | orchestrator |
2026-01-07 01:11:36.664493 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-07 01:11:36.664498 | orchestrator | Wednesday 07 January 2026 01:10:05 +0000 (0:00:00.063) 0:01:00.274 *****
2026-01-07 01:11:36.664503 | orchestrator |
2026-01-07 01:11:36.664508 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-01-07 01:11:36.664513 | orchestrator | Wednesday 07 January 2026 01:10:05 +0000 (0:00:00.246) 0:01:00.520 *****
2026-01-07 01:11:36.664518 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:36.664523 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:36.664528 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:36.664534 | orchestrator |
2026-01-07 01:11:36.664539 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-01-07 01:11:36.664544 | orchestrator | Wednesday 07 January 2026 01:10:07 +0000 (0:00:02.192) 0:01:02.713 *****
2026-01-07 01:11:36.664549 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:36.664554 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:36.664559 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-01-07 01:11:36.664570 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-01-07 01:11:36.664576 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-01-07 01:11:36.664581 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-01-07 01:11:36.664586 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:11:36.664591 | orchestrator |
2026-01-07 01:11:36.664596 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-07 01:11:36.664601 | orchestrator | Wednesday 07 January 2026 01:10:57 +0000 (0:00:50.052) 0:01:52.766 *****
2026-01-07 01:11:36.664607 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:36.664612 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:11:36.664617 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:11:36.664622 | orchestrator |
2026-01-07 01:11:36.664627 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-07 01:11:36.664632 | orchestrator | Wednesday 07 January 2026 01:11:28 +0000 (0:00:30.604) 0:02:23.370 *****
2026-01-07 01:11:36.664637 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:11:36.664642 | orchestrator |
2026-01-07 01:11:36.664647 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-07 01:11:36.664652 | orchestrator | Wednesday 07 January 2026 01:11:30 +0000 (0:00:01.967) 0:02:25.337 *****
2026-01-07 01:11:36.664657 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:36.664663 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:36.664668 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:36.664673 | orchestrator |
2026-01-07 01:11:36.664678 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-07 01:11:36.664683 | orchestrator | Wednesday 07 January 2026 01:11:31 +0000 (0:00:00.487) 0:02:25.825 *****
2026-01-07 01:11:36.664689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth':
False}}})
2026-01-07 01:11:36.664696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-07 01:11:36.664701 | orchestrator |
2026-01-07 01:11:36.664707 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-07 01:11:36.664712 | orchestrator | Wednesday 07 January 2026 01:11:33 +0000 (0:00:02.237) 0:02:28.063 *****
2026-01-07 01:11:36.664717 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:36.664722 | orchestrator |
2026-01-07 01:11:36.664727 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:11:36.664734 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:11:36.664739 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:11:36.664744 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:11:36.664749 | orchestrator |
2026-01-07 01:11:36.664755 | orchestrator |
2026-01-07 01:11:36.664762 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:11:36.664768 | orchestrator | Wednesday 07 January 2026 01:11:33 +0000 (0:00:00.248) 0:02:28.311 *****
2026-01-07 01:11:36.664773 | orchestrator | ===============================================================================
2026-01-07 01:11:36.664782 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.05s
2026-01-07 01:11:36.664787 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.84s
2026-01-07 01:11:36.664793 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.60s
2026-01-07 01:11:36.664798 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.62s
2026-01-07 01:11:36.664806 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.33s
2026-01-07 01:11:36.664811 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.24s
2026-01-07 01:11:36.664816 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.19s
2026-01-07 01:11:36.664821 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.97s
2026-01-07 01:11:36.664826 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.68s
2026-01-07 01:11:36.664832 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.54s
2026-01-07 01:11:36.664837 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.36s
2026-01-07 01:11:36.664842 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.34s
2026-01-07 01:11:36.664847 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.28s
2026-01-07 01:11:36.664852 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.18s
2026-01-07 01:11:36.664857 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.03s
2026-01-07 01:11:36.665010 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.96s
2026-01-07 01:11:36.665026 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.92s
2026-01-07 01:11:36.665031 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.83s
2026-01-07 01:11:36.665037 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.80s
2026-01-07 01:11:36.665042 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.74s
2026-01-07 01:11:36.665047 | orchestrator | 2026-01-07 01:11:36 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:11:36.665052 | orchestrator | 2026-01-07 01:11:36 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:11:39.706396 | orchestrator | 2026-01-07 01:11:39 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state STARTED
2026-01-07 01:11:39.707983 | orchestrator | 2026-01-07 01:11:39 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state STARTED
2026-01-07 01:11:39.709669 | orchestrator | 2026-01-07 01:11:39 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:11:39.710112 | orchestrator | 2026-01-07 01:11:39 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:11:42.775535 | orchestrator | 2026-01-07 01:11:42 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state STARTED
2026-01-07 01:11:42.779469 | orchestrator | 2026-01-07 01:11:42 | INFO  | Task edaf59ec-9b4c-4a3c-bb0c-864660af89db is in state SUCCESS
2026-01-07 01:11:42.781405 | orchestrator |
2026-01-07 01:11:42.781459 | orchestrator |
2026-01-07 01:11:42.781465 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:11:42.781470 | orchestrator |
2026-01-07 01:11:42.781473 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:11:42.781477 | orchestrator | Wednesday 07 January 2026 01:08:51 +0000 (0:00:00.367) 0:00:00.367 *****
2026-01-07 01:11:42.781480 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:11:42.781484 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:11:42.781487
| orchestrator | ok: [testbed-node-2]
2026-01-07 01:11:42.781490 | orchestrator |
2026-01-07 01:11:42.781494 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:11:42.781508 | orchestrator | Wednesday 07 January 2026 01:08:51 +0000 (0:00:00.293) 0:00:00.661 *****
2026-01-07 01:11:42.781511 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-01-07 01:11:42.781515 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-01-07 01:11:42.781519 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-01-07 01:11:42.781522 | orchestrator |
2026-01-07 01:11:42.781525 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-01-07 01:11:42.781528 | orchestrator |
2026-01-07 01:11:42.781531 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-07 01:11:42.781535 | orchestrator | Wednesday 07 January 2026 01:08:51 +0000 (0:00:00.339) 0:00:01.000 *****
2026-01-07 01:11:42.781539 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:11:42.781545 | orchestrator |
2026-01-07 01:11:42.781550 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-01-07 01:11:42.781555 | orchestrator | Wednesday 07 January 2026 01:08:52 +0000 (0:00:00.561) 0:00:01.561 *****
2026-01-07 01:11:42.781580 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-01-07 01:11:42.781586 | orchestrator |
2026-01-07 01:11:42.781591 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-01-07 01:11:42.781604 | orchestrator | Wednesday 07 January 2026 01:08:55 +0000 (0:00:03.330) 0:00:04.892 *****
2026-01-07 01:11:42.781611 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-01-07 01:11:42.781620 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-01-07 01:11:42.781623 | orchestrator |
2026-01-07 01:11:42.781626 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-01-07 01:11:42.781629 | orchestrator | Wednesday 07 January 2026 01:09:01 +0000 (0:00:05.866) 0:00:10.758 *****
2026-01-07 01:11:42.781633 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-07 01:11:42.781636 | orchestrator |
2026-01-07 01:11:42.781639 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-01-07 01:11:42.781642 | orchestrator | Wednesday 07 January 2026 01:09:04 +0000 (0:00:03.113) 0:00:13.871 *****
2026-01-07 01:11:42.781645 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:11:42.781648 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-01-07 01:11:42.781651 | orchestrator |
2026-01-07 01:11:42.781655 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-01-07 01:11:42.781658 | orchestrator | Wednesday 07 January 2026 01:09:08 +0000 (0:00:03.410) 0:00:17.282 *****
2026-01-07 01:11:42.781661 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-07 01:11:42.781664 | orchestrator |
2026-01-07 01:11:42.781668 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-01-07 01:11:42.781673 | orchestrator | Wednesday 07 January 2026 01:09:12 +0000 (0:00:04.084) 0:00:21.366 *****
2026-01-07 01:11:42.781678 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-01-07 01:11:42.781684 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-01-07 01:11:42.781689 | orchestrator |
2026-01-07 01:11:42.781695 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-01-07 01:11:42.781701 | orchestrator | Wednesday 07 January 2026 01:09:20 +0000 (0:00:07.772) 0:00:29.139 ***** 2026-01-07 01:11:42.781708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:11:42.781730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:11:42.781737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:11:42.781743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.781748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.781754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.781763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.781773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.781778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.781786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.781791 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.781795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.781801 | orchestrator | 2026-01-07 01:11:42.781805 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:11:42.781810 | orchestrator | Wednesday 07 January 2026 01:09:22 +0000 (0:00:02.481) 0:00:31.621 ***** 2026-01-07 01:11:42.781816 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:42.781822 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:42.781827 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:42.781832 | orchestrator | 2026-01-07 01:11:42.781838 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-01-07 01:11:42.781843 | orchestrator | Wednesday 07 January 2026 01:09:22 +0000 (0:00:00.410) 0:00:32.031 ***** 2026-01-07 01:11:42.781848 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:11:42.781854 | orchestrator | 2026-01-07 01:11:42.781859 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-01-07 01:11:42.781864 | orchestrator | Wednesday 07 January 2026 01:09:23 +0000 (0:00:00.866) 0:00:32.897 ***** 2026-01-07 01:11:42.781875 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-01-07 01:11:42.781881 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-01-07 01:11:42.781918 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-01-07 01:11:42.781923 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-01-07 01:11:42.781928 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-01-07 01:11:42.781933 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-01-07 01:11:42.781937 | orchestrator | 2026-01-07 01:11:42.781941 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-01-07 01:11:42.781944 | orchestrator | Wednesday 07 January 2026 01:09:25 +0000 (0:00:01.688) 0:00:34.585 ***** 2026-01-07 01:11:42.781951 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-07 01:11:42.781956 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-07 01:11:42.781961 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-07 01:11:42.781972 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-07 01:11:42.782001 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-07 01:11:42.782006 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-07 01:11:42.782037 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-07 01:11:42.782047 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-07 01:11:42.782058 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-07 01:11:42.782079 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-07 01:11:42.782085 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-07 01:11:42.782092 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-07 01:11:42.782096 | orchestrator | 2026-01-07 01:11:42.782100 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-07 01:11:42.782104 | orchestrator | Wednesday 07 January 2026 01:09:29 +0000 (0:00:04.211) 0:00:38.797 ***** 2026-01-07 01:11:42.782108 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:11:42.782116 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:11:42.782122 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:11:42.782127 | orchestrator | 
2026-01-07 01:11:42.782132 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-07 01:11:42.782138 | orchestrator | Wednesday 07 January 2026 01:09:31 +0000 (0:00:02.204) 0:00:41.001 ***** 2026-01-07 01:11:42.782144 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-01-07 01:11:42.782149 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-01-07 01:11:42.782154 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-01-07 01:11:42.782158 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 01:11:42.782162 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 01:11:42.782165 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 01:11:42.782169 | orchestrator | 2026-01-07 01:11:42.782172 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-07 01:11:42.782176 | orchestrator | Wednesday 07 January 2026 01:09:35 +0000 (0:00:03.808) 0:00:44.810 ***** 2026-01-07 01:11:42.782179 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-01-07 01:11:42.782183 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-07 01:11:42.782187 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-07 01:11:42.782190 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-07 01:11:42.782194 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-07 01:11:42.782199 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-07 01:11:42.782204 | orchestrator | 2026-01-07 01:11:42.782209 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-07 01:11:42.782214 | orchestrator | Wednesday 07 January 2026 01:09:37 +0000 (0:00:01.409) 
0:00:46.220 ***** 2026-01-07 01:11:42.782219 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:42.782224 | orchestrator | 2026-01-07 01:11:42.782229 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-07 01:11:42.782235 | orchestrator | Wednesday 07 January 2026 01:09:37 +0000 (0:00:00.125) 0:00:46.345 ***** 2026-01-07 01:11:42.782241 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:42.782247 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:42.782251 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:42.782255 | orchestrator | 2026-01-07 01:11:42.782258 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:11:42.782261 | orchestrator | Wednesday 07 January 2026 01:09:37 +0000 (0:00:00.334) 0:00:46.679 ***** 2026-01-07 01:11:42.782265 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:11:42.782269 | orchestrator | 2026-01-07 01:11:42.782273 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-07 01:11:42.782287 | orchestrator | Wednesday 07 January 2026 01:09:38 +0000 (0:00:00.943) 0:00:47.622 ***** 2026-01-07 01:11:42.782292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:11:42.782309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:11:42.782315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:11:42.782322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.782329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.782339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.782344 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.782352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.782356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.782361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.782365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.782372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:11:42.782378 | orchestrator | 2026-01-07 01:11:42.782382 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-07 01:11:42.782385 | orchestrator | Wednesday 07 January 2026 01:09:43 +0000 (0:00:04.512) 0:00:52.135 ***** 2026-01-07 01:11:42.782391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:11:42.782394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782404 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:42.782411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:11:42.782417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782428 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:11:42.782432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:11:42.782435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782449 | orchestrator | skipping: 
[testbed-node-1] 2026-01-07 01:11:42.782452 | orchestrator | 2026-01-07 01:11:42.782455 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-01-07 01:11:42.782458 | orchestrator | Wednesday 07 January 2026 01:09:44 +0000 (0:00:01.077) 0:00:53.212 ***** 2026-01-07 01:11:42.782463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:11:42.782467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782481 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:11:42.782484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:11:42.782489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:11:42.782499 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:11:42.782502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:11:42.782509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782521 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:42.782524 | orchestrator |
2026-01-07 01:11:42.782527 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-01-07 01:11:42.782530 | orchestrator | Wednesday 07 January 2026 01:09:45 +0000 (0:00:01.804) 0:00:55.016 *****
2026-01-07 01:11:42.782534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782585 | orchestrator |
2026-01-07 01:11:42.782588 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-01-07 01:11:42.782591 | orchestrator | Wednesday 07 January 2026 01:09:49 +0000 (0:00:03.900) 0:00:58.917 *****
2026-01-07 01:11:42.782595 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-01-07 01:11:42.782598 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-01-07 01:11:42.782603 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-01-07 01:11:42.782606 | orchestrator |
2026-01-07 01:11:42.782609 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-01-07 01:11:42.782612 | orchestrator | Wednesday 07 January 2026 01:09:51 +0000 (0:00:01.613) 0:01:00.530 *****
2026-01-07 01:11:42.782618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782667 | orchestrator |
2026-01-07 01:11:42.782670 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-01-07 01:11:42.782673 | orchestrator | Wednesday 07 January 2026 01:10:03 +0000 (0:00:12.462) 0:01:12.993 *****
2026-01-07 01:11:42.782677 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:42.782683 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:11:42.782688 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:11:42.782693 | orchestrator |
2026-01-07 01:11:42.782699 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-01-07 01:11:42.782707 | orchestrator | Wednesday 07 January 2026 01:10:05 +0000 (0:00:01.830) 0:01:14.823 *****
2026-01-07 01:11:42.782713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782741 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:42.782746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782775 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:42.782778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782796 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:42.782800 | orchestrator |
2026-01-07 01:11:42.782803 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-01-07 01:11:42.782806 | orchestrator | Wednesday 07 January 2026 01:10:06 +0000 (0:00:00.798) 0:01:15.621 *****
2026-01-07 01:11:42.782809 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:42.782812 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:42.782815 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:42.782819 | orchestrator |
2026-01-07 01:11:42.782822 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-01-07 01:11:42.782827 | orchestrator | Wednesday 07 January 2026 01:10:06 +0000 (0:00:00.319) 0:01:15.941 *****
2026-01-07 01:11:42.782834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:11:42.782858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:11:42.782922 | orchestrator |
2026-01-07 01:11:42.782927 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-07 01:11:42.782932 | orchestrator | Wednesday 07 January 2026 01:10:10 +0000 (0:00:03.375) 0:01:19.317 *****
2026-01-07 01:11:42.782937 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:42.782942 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:11:42.782947 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:11:42.782951 | orchestrator |
2026-01-07 01:11:42.782956 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-01-07 01:11:42.782960 | orchestrator | Wednesday 07 January 2026 01:10:10 +0000 (0:00:00.541) 0:01:19.859 *****
2026-01-07 01:11:42.782965 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:42.782970 | orchestrator |
2026-01-07 01:11:42.782974 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-01-07 01:11:42.782979 | orchestrator | Wednesday 07 January 2026 01:10:12 +0000 (0:00:02.220) 0:01:22.079 *****
2026-01-07 01:11:42.782983 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:42.782989 | orchestrator |
2026-01-07 01:11:42.782994 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-01-07 01:11:42.783000 | orchestrator | Wednesday 07 January 2026 01:10:15 +0000 (0:00:02.368) 0:01:24.448 *****
2026-01-07 01:11:42.783005 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:42.783010 | orchestrator |
2026-01-07 01:11:42.783015 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-07 01:11:42.783019 | orchestrator | Wednesday 07 January 2026 01:10:37 +0000 (0:00:22.427) 0:01:46.876 *****
2026-01-07 01:11:42.783023 | orchestrator |
2026-01-07 01:11:42.783028 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-07 01:11:42.783033 | orchestrator | Wednesday 07 January 2026 01:10:37 +0000 (0:00:00.068) 0:01:46.944 *****
2026-01-07 01:11:42.783037 | orchestrator |
2026-01-07 01:11:42.783042 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-07 01:11:42.783047 | orchestrator | Wednesday 07 January 2026 01:10:37 +0000 (0:00:00.072) 0:01:47.017 *****
2026-01-07 01:11:42.783052 | orchestrator |
2026-01-07 01:11:42.783057 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-01-07 01:11:42.783063 | orchestrator | Wednesday 07 January 2026 01:10:37 +0000 (0:00:00.067) 0:01:47.084 *****
2026-01-07 01:11:42.783067 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:42.783072 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:11:42.783078 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:11:42.783084 | orchestrator |
2026-01-07 01:11:42.783089 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-01-07 01:11:42.783094 | orchestrator | Wednesday 07 January 2026 01:10:56 +0000 (0:00:18.869) 0:02:05.954 *****
2026-01-07 01:11:42.783099 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:42.783104 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:11:42.783109 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:11:42.783114 | orchestrator |
2026-01-07 01:11:42.783120 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-01-07 01:11:42.783124 | orchestrator | Wednesday 07 January 2026 01:11:05 +0000 (0:00:08.346) 0:02:14.300 *****
2026-01-07 01:11:42.783129 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:42.783134 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:11:42.783139 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:11:42.783143 | orchestrator |
2026-01-07 01:11:42.783148 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-01-07 01:11:42.783154 | orchestrator | Wednesday 07 January 2026 01:11:30 +0000 (0:00:24.964) 0:02:39.264 *****
2026-01-07 01:11:42.783164 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:11:42.783169 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:11:42.783174 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:11:42.783179 | orchestrator |
2026-01-07 01:11:42.783184 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-01-07 01:11:42.783194 | orchestrator | Wednesday 07 January 2026 01:11:40 +0000 (0:00:10.287) 0:02:49.552 *****
2026-01-07 01:11:42.783200 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:11:42.783205 | orchestrator |
2026-01-07 01:11:42.783210 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:11:42.783215 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-07 01:11:42.783221 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:11:42.783226 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:11:42.783231 | orchestrator |
2026-01-07 01:11:42.783236 | orchestrator |
2026-01-07 01:11:42.783241 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:11:42.783246 | orchestrator | Wednesday 07 January 2026 01:11:40 +0000 (0:00:00.285) 0:02:49.837 *****
2026-01-07 01:11:42.783252 | orchestrator | ===============================================================================
2026-01-07 01:11:42.783257 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.96s
2026-01-07 01:11:42.783263 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.43s
2026-01-07 01:11:42.783268 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 18.87s
2026-01-07 01:11:42.783273 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.46s
2026-01-07 01:11:42.783278 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.29s
2026-01-07 01:11:42.783284 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.35s
2026-01-07 01:11:42.783292 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.77s
2026-01-07 01:11:42.783298 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.87s
2026-01-07 01:11:42.783303 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.51s
2026-01-07 01:11:42.783308 | orchestrator | cinder : Copying over
multiple ceph.conf for cinder services ------------ 4.21s
2026-01-07 01:11:42.783313 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.08s
2026-01-07 01:11:42.783318 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.90s
2026-01-07 01:11:42.783323 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.81s
2026-01-07 01:11:42.783328 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.41s
2026-01-07 01:11:42.783333 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.38s
2026-01-07 01:11:42.783338 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.33s
2026-01-07 01:11:42.783342 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.11s
2026-01-07 01:11:42.783347 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.48s
2026-01-07 01:11:42.783352 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.37s
2026-01-07 01:11:42.783357 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.22s
2026-01-07 01:11:42.783362 | orchestrator | 2026-01-07 01:11:42 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:11:42.783367 | orchestrator | 2026-01-07 01:11:42 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:11:45.831603 | orchestrator | 2026-01-07 01:11:45 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state STARTED
2026-01-07 01:11:45.833473 | orchestrator | 2026-01-07 01:11:45 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:11:45.833521 | orchestrator | 2026-01-07 01:11:45 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:12:52.879007 | orchestrator | 2026-01-07 01:12:52 | INFO  | Task fcae8e0d-9041-40f9-bb02-7161c5b155ae is in state SUCCESS
2026-01-07 01:12:52.879062 | orchestrator | 2026-01-07 01:12:52 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:12:52.882109 | orchestrator | 2026-01-07 01:12:52 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:12:52.882184 | orchestrator | 2026-01-07 01:12:52 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:14:39.527942 | orchestrator | 2026-01-07 01:14:39 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:14:39.533324 | orchestrator | 2026-01-07 01:14:39 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED
2026-01-07 01:14:39.533379 | orchestrator | 2026-01-07 01:14:39 | INFO  | Wait 1 second(s)
until the next check 2026-01-07 01:14:42.573530 | orchestrator | 2026-01-07 01:14:42 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:14:42.575698 | orchestrator | 2026-01-07 01:14:42 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:14:42.576123 | orchestrator | 2026-01-07 01:14:42 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:45.617658 | orchestrator | 2026-01-07 01:14:45 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:14:45.620034 | orchestrator | 2026-01-07 01:14:45 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:14:45.620077 | orchestrator | 2026-01-07 01:14:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:48.668631 | orchestrator | 2026-01-07 01:14:48 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:14:48.670288 | orchestrator | 2026-01-07 01:14:48 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:14:48.670335 | orchestrator | 2026-01-07 01:14:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:51.717294 | orchestrator | 2026-01-07 01:14:51 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:14:51.717343 | orchestrator | 2026-01-07 01:14:51 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:14:51.717348 | orchestrator | 2026-01-07 01:14:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:54.751866 | orchestrator | 2026-01-07 01:14:54 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:14:54.753676 | orchestrator | 2026-01-07 01:14:54 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:14:54.753722 | orchestrator | 2026-01-07 01:14:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:57.803921 | orchestrator | 2026-01-07 
01:14:57 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:14:57.805867 | orchestrator | 2026-01-07 01:14:57 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:14:57.805915 | orchestrator | 2026-01-07 01:14:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:00.856990 | orchestrator | 2026-01-07 01:15:00 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:00.858248 | orchestrator | 2026-01-07 01:15:00 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:00.858509 | orchestrator | 2026-01-07 01:15:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:03.907399 | orchestrator | 2026-01-07 01:15:03 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:03.909681 | orchestrator | 2026-01-07 01:15:03 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:03.909730 | orchestrator | 2026-01-07 01:15:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:06.950754 | orchestrator | 2026-01-07 01:15:06 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:06.951779 | orchestrator | 2026-01-07 01:15:06 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:06.951965 | orchestrator | 2026-01-07 01:15:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:10.006272 | orchestrator | 2026-01-07 01:15:10 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:10.007644 | orchestrator | 2026-01-07 01:15:10 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:10.007704 | orchestrator | 2026-01-07 01:15:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:13.062592 | orchestrator | 2026-01-07 01:15:13 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state 
STARTED 2026-01-07 01:15:13.064085 | orchestrator | 2026-01-07 01:15:13 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:13.064135 | orchestrator | 2026-01-07 01:15:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:16.107366 | orchestrator | 2026-01-07 01:15:16 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:16.109202 | orchestrator | 2026-01-07 01:15:16 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:16.109262 | orchestrator | 2026-01-07 01:15:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:19.157545 | orchestrator | 2026-01-07 01:15:19 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:19.158205 | orchestrator | 2026-01-07 01:15:19 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:19.158229 | orchestrator | 2026-01-07 01:15:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:22.199048 | orchestrator | 2026-01-07 01:15:22 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:22.199880 | orchestrator | 2026-01-07 01:15:22 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:22.199916 | orchestrator | 2026-01-07 01:15:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:25.231048 | orchestrator | 2026-01-07 01:15:25 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:25.231946 | orchestrator | 2026-01-07 01:15:25 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:25.231978 | orchestrator | 2026-01-07 01:15:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:28.269050 | orchestrator | 2026-01-07 01:15:28 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:28.270407 | orchestrator | 2026-01-07 01:15:28 | INFO  
| Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:28.270443 | orchestrator | 2026-01-07 01:15:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:31.318623 | orchestrator | 2026-01-07 01:15:31 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:31.321863 | orchestrator | 2026-01-07 01:15:31 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:31.322070 | orchestrator | 2026-01-07 01:15:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:34.373136 | orchestrator | 2026-01-07 01:15:34 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:34.373980 | orchestrator | 2026-01-07 01:15:34 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:34.374045 | orchestrator | 2026-01-07 01:15:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:37.423081 | orchestrator | 2026-01-07 01:15:37 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:37.426501 | orchestrator | 2026-01-07 01:15:37 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:37.426588 | orchestrator | 2026-01-07 01:15:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:40.472333 | orchestrator | 2026-01-07 01:15:40 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:40.473747 | orchestrator | 2026-01-07 01:15:40 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:40.473798 | orchestrator | 2026-01-07 01:15:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:43.518499 | orchestrator | 2026-01-07 01:15:43 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:43.520160 | orchestrator | 2026-01-07 01:15:43 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 
01:15:43.521395 | orchestrator | 2026-01-07 01:15:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:46.566543 | orchestrator | 2026-01-07 01:15:46 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:46.567410 | orchestrator | 2026-01-07 01:15:46 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:46.567493 | orchestrator | 2026-01-07 01:15:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:49.610695 | orchestrator | 2026-01-07 01:15:49 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:49.610769 | orchestrator | 2026-01-07 01:15:49 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:49.610775 | orchestrator | 2026-01-07 01:15:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:52.667319 | orchestrator | 2026-01-07 01:15:52 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:52.667441 | orchestrator | 2026-01-07 01:15:52 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:52.667453 | orchestrator | 2026-01-07 01:15:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:55.703444 | orchestrator | 2026-01-07 01:15:55 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:55.704257 | orchestrator | 2026-01-07 01:15:55 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:55.704290 | orchestrator | 2026-01-07 01:15:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:58.794640 | orchestrator | 2026-01-07 01:15:58 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:15:58.797938 | orchestrator | 2026-01-07 01:15:58 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:15:58.798133 | orchestrator | 2026-01-07 01:15:58 | INFO  | Wait 1 second(s) 
until the next check 2026-01-07 01:16:01.839127 | orchestrator | 2026-01-07 01:16:01 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:01.841141 | orchestrator | 2026-01-07 01:16:01 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:01.841244 | orchestrator | 2026-01-07 01:16:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:04.886850 | orchestrator | 2026-01-07 01:16:04 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:04.888382 | orchestrator | 2026-01-07 01:16:04 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:04.888425 | orchestrator | 2026-01-07 01:16:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:07.930556 | orchestrator | 2026-01-07 01:16:07 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:07.932992 | orchestrator | 2026-01-07 01:16:07 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:07.933208 | orchestrator | 2026-01-07 01:16:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:10.984139 | orchestrator | 2026-01-07 01:16:10 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:10.986714 | orchestrator | 2026-01-07 01:16:10 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:10.986763 | orchestrator | 2026-01-07 01:16:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:14.033536 | orchestrator | 2026-01-07 01:16:14 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:14.036704 | orchestrator | 2026-01-07 01:16:14 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:14.037157 | orchestrator | 2026-01-07 01:16:14 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:17.090247 | orchestrator | 2026-01-07 
01:16:17 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:17.091156 | orchestrator | 2026-01-07 01:16:17 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:17.091214 | orchestrator | 2026-01-07 01:16:17 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:20.121578 | orchestrator | 2026-01-07 01:16:20 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:20.122531 | orchestrator | 2026-01-07 01:16:20 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:20.122693 | orchestrator | 2026-01-07 01:16:20 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:23.152434 | orchestrator | 2026-01-07 01:16:23 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:23.152888 | orchestrator | 2026-01-07 01:16:23 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:23.152998 | orchestrator | 2026-01-07 01:16:23 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:26.182260 | orchestrator | 2026-01-07 01:16:26 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:26.182320 | orchestrator | 2026-01-07 01:16:26 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:26.182363 | orchestrator | 2026-01-07 01:16:26 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:29.232552 | orchestrator | 2026-01-07 01:16:29 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:29.234285 | orchestrator | 2026-01-07 01:16:29 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:29.234483 | orchestrator | 2026-01-07 01:16:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:32.280390 | orchestrator | 2026-01-07 01:16:32 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state 
STARTED 2026-01-07 01:16:32.284463 | orchestrator | 2026-01-07 01:16:32 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:32.284507 | orchestrator | 2026-01-07 01:16:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:35.331165 | orchestrator | 2026-01-07 01:16:35 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:35.333082 | orchestrator | 2026-01-07 01:16:35 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:35.333164 | orchestrator | 2026-01-07 01:16:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:38.370300 | orchestrator | 2026-01-07 01:16:38 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:38.372098 | orchestrator | 2026-01-07 01:16:38 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:38.372202 | orchestrator | 2026-01-07 01:16:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:41.423835 | orchestrator | 2026-01-07 01:16:41 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:41.426455 | orchestrator | 2026-01-07 01:16:41 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:41.426650 | orchestrator | 2026-01-07 01:16:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:44.470920 | orchestrator | 2026-01-07 01:16:44 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:44.472113 | orchestrator | 2026-01-07 01:16:44 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:44.472162 | orchestrator | 2026-01-07 01:16:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:47.515799 | orchestrator | 2026-01-07 01:16:47 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:47.517792 | orchestrator | 2026-01-07 01:16:47 | INFO  
| Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:47.517838 | orchestrator | 2026-01-07 01:16:47 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:50.549088 | orchestrator | 2026-01-07 01:16:50 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:50.550616 | orchestrator | 2026-01-07 01:16:50 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:50.550819 | orchestrator | 2026-01-07 01:16:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:53.601777 | orchestrator | 2026-01-07 01:16:53 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:53.603838 | orchestrator | 2026-01-07 01:16:53 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:53.603886 | orchestrator | 2026-01-07 01:16:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:56.649959 | orchestrator | 2026-01-07 01:16:56 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:56.652374 | orchestrator | 2026-01-07 01:16:56 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:56.652430 | orchestrator | 2026-01-07 01:16:56 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:59.697435 | orchestrator | 2026-01-07 01:16:59 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:16:59.700109 | orchestrator | 2026-01-07 01:16:59 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 01:16:59.700915 | orchestrator | 2026-01-07 01:16:59 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:17:02.756029 | orchestrator | 2026-01-07 01:17:02 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED 2026-01-07 01:17:02.757689 | orchestrator | 2026-01-07 01:17:02 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state STARTED 2026-01-07 
01:17:02.758111 | orchestrator | 2026-01-07 01:17:02 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:05.815737 | orchestrator | 2026-01-07 01:17:05 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:05.820844 | orchestrator | 2026-01-07 01:17:05 | INFO  | Task 3c66b4fd-60d9-494f-ba40-907c5d49af7b is in state SUCCESS
2026-01-07 01:17:05.821976 | orchestrator |
2026-01-07 01:17:05.822002 | orchestrator |
2026-01-07 01:17:05.822006 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:17:05.822010 | orchestrator |
2026-01-07 01:17:05.822046 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:17:05.822049 | orchestrator | Wednesday 07 January 2026 01:11:22 +0000 (0:00:00.176) 0:00:00.176 *****
2026-01-07 01:17:05.822053 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:05.822057 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:05.822060 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:05.822063 | orchestrator |
2026-01-07 01:17:05.822067 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:17:05.822070 | orchestrator | Wednesday 07 January 2026 01:11:22 +0000 (0:00:00.306) 0:00:00.483 *****
2026-01-07 01:17:05.822073 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-01-07 01:17:05.822076 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-01-07 01:17:05.822079 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-01-07 01:17:05.822083 | orchestrator |
2026-01-07 01:17:05.822086 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-01-07 01:17:05.822089 | orchestrator |
2026-01-07 01:17:05.822092 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-01-07 01:17:05.822095 | orchestrator | Wednesday 07 January 2026 01:11:23 +0000 (0:00:00.750) 0:00:01.233 *****
2026-01-07 01:17:05.822098 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:05.822101 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:05.822104 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:05.822108 | orchestrator |
2026-01-07 01:17:05.822111 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:17:05.822117 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:17:05.822124 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:17:05.822130 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:17:05.822135 | orchestrator |
2026-01-07 01:17:05.822140 | orchestrator |
2026-01-07 01:17:05.822144 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:17:05.822150 | orchestrator | Wednesday 07 January 2026 01:12:50 +0000 (0:01:26.715) 0:01:27.949 *****
2026-01-07 01:17:05.822155 | orchestrator | ===============================================================================
2026-01-07 01:17:05.822160 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 86.72s
2026-01-07 01:17:05.822165 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s
2026-01-07 01:17:05.822169 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-01-07 01:17:05.822174 | orchestrator |
2026-01-07 01:17:05.822179 | orchestrator |
2026-01-07 01:17:05.822184 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:17:05.822189 | orchestrator |
2026-01-07 01:17:05.822194 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-01-07 01:17:05.822201 | orchestrator | Wednesday 07 January 2026 01:08:54 +0000 (0:00:00.246) 0:00:00.246 *****
2026-01-07 01:17:05.822205 | orchestrator | changed: [testbed-manager]
2026-01-07 01:17:05.822213 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.822216 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:05.822219 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:05.822222 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:05.822225 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:05.822229 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:05.822232 | orchestrator |
2026-01-07 01:17:05.822235 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:17:05.822238 | orchestrator | Wednesday 07 January 2026 01:08:55 +0000 (0:00:00.745) 0:00:00.992 *****
2026-01-07 01:17:05.822241 | orchestrator | changed: [testbed-manager]
2026-01-07 01:17:05.822244 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.822251 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:05.822255 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:05.822258 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:05.822261 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:05.822264 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:05.822267 | orchestrator |
2026-01-07 01:17:05.822270 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:17:05.822273 | orchestrator | Wednesday 07 January 2026 01:08:55 +0000 (0:00:00.612) 0:00:01.604 *****
2026-01-07 01:17:05.822277 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-01-07 01:17:05.822280 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-07 01:17:05.822283 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-07 01:17:05.822286 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-07 01:17:05.822289 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-07 01:17:05.822292 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-07 01:17:05.822295 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-07 01:17:05.822298 | orchestrator |
2026-01-07 01:17:05.822302 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-07 01:17:05.822305 | orchestrator |
2026-01-07 01:17:05.822308 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-07 01:17:05.822311 | orchestrator | Wednesday 07 January 2026 01:08:56 +0000 (0:00:00.805) 0:00:02.410 *****
2026-01-07 01:17:05.822314 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:05.822317 | orchestrator |
2026-01-07 01:17:05.822320 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-07 01:17:05.822323 | orchestrator | Wednesday 07 January 2026 01:08:57 +0000 (0:00:00.734) 0:00:03.144 *****
2026-01-07 01:17:05.822332 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-07 01:17:05.822343 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-07 01:17:05.822346 | orchestrator |
2026-01-07 01:17:05.822350 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-07 01:17:05.822353 | orchestrator | Wednesday 07 January 2026 01:09:01 +0000 (0:00:03.990) 0:00:07.135 *****
2026-01-07 01:17:05.822356 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 01:17:05.822360 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 01:17:05.822365 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.822370 | orchestrator |
2026-01-07 01:17:05.822378 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-07 01:17:05.822384 | orchestrator | Wednesday 07 January 2026 01:09:05 +0000 (0:00:04.068) 0:00:11.203 *****
2026-01-07 01:17:05.822389 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.822395 | orchestrator |
2026-01-07 01:17:05.822399 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-07 01:17:05.822404 | orchestrator | Wednesday 07 January 2026 01:09:06 +0000 (0:00:00.869) 0:00:12.073 *****
2026-01-07 01:17:05.822409 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.822414 | orchestrator |
2026-01-07 01:17:05.822419 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-07 01:17:05.822424 | orchestrator | Wednesday 07 January 2026 01:09:07 +0000 (0:00:01.315) 0:00:13.389 *****
2026-01-07 01:17:05.822458 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.822491 | orchestrator |
2026-01-07 01:17:05.822497 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-07 01:17:05.822503 | orchestrator | Wednesday 07 January 2026 01:09:10 +0000 (0:00:02.801) 0:00:16.191 *****
2026-01-07 01:17:05.822508 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.822513 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.822519 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.822525 | orchestrator |
2026-01-07 01:17:05.822574 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-07 01:17:05.822605 | orchestrator | Wednesday 07 January 2026 01:09:10 +0000 (0:00:00.390) 0:00:16.581 *****
2026-01-07 01:17:05.822612 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:05.822618 | orchestrator |
2026-01-07 01:17:05.822624 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-07 01:17:05.822629 | orchestrator | Wednesday 07 January 2026 01:09:41 +0000 (0:00:30.473) 0:00:47.055 *****
2026-01-07 01:17:05.822635 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.822640 | orchestrator |
2026-01-07 01:17:05.822646 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-07 01:17:05.822652 | orchestrator | Wednesday 07 January 2026 01:09:56 +0000 (0:00:14.938) 0:01:01.993 *****
2026-01-07 01:17:05.822657 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:05.822662 | orchestrator |
2026-01-07 01:17:05.822667 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-07 01:17:05.822673 | orchestrator | Wednesday 07 January 2026 01:10:10 +0000 (0:00:13.936) 0:01:15.930 *****
2026-01-07 01:17:05.822678 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:05.822684 | orchestrator |
2026-01-07 01:17:05.822689 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-07 01:17:05.822694 | orchestrator | Wednesday 07 January 2026 01:10:11 +0000 (0:00:01.185) 0:01:17.116 *****
2026-01-07 01:17:05.822699 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.822738 | orchestrator |
2026-01-07 01:17:05.822745 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-07 01:17:05.822750 | orchestrator | Wednesday 07 January 2026 01:10:11 +0000 (0:00:00.480) 0:01:17.596 *****
2026-01-07 01:17:05.822756 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:05.822762 | orchestrator |
2026-01-07 01:17:05.822768 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-07 01:17:05.822773 | orchestrator | Wednesday 07 January 2026 01:10:12 +0000 (0:00:00.561) 0:01:18.158 *****
2026-01-07 01:17:05.822778 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:05.822784 | orchestrator |
2026-01-07 01:17:05.822790 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-07 01:17:05.822795 | orchestrator | Wednesday 07 January 2026 01:10:32 +0000 (0:00:20.569) 0:01:38.727 *****
2026-01-07 01:17:05.822800 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.822805 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.822811 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.822816 | orchestrator |
2026-01-07 01:17:05.822821 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-07 01:17:05.822826 | orchestrator |
2026-01-07 01:17:05.822831 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-07 01:17:05.822836 | orchestrator | Wednesday 07 January 2026 01:10:33 +0000 (0:00:00.311) 0:01:39.038 *****
2026-01-07 01:17:05.822842 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:05.822846 | orchestrator |
2026-01-07 01:17:05.822851 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-07 01:17:05.822856 | orchestrator | Wednesday 07 January 2026 01:10:33 +0000 (0:00:00.545) 0:01:39.584 *****
2026-01-07 01:17:05.822860 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.822865 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.822870 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.822875 | orchestrator |
2026-01-07 01:17:05.822880 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-07 01:17:05.822885 | orchestrator | Wednesday 07 January 2026 01:10:35 +0000 (0:00:01.761) 0:01:41.345 *****
2026-01-07 01:17:05.822890 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.822895 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.822901 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.822905 | orchestrator |
2026-01-07 01:17:05.822916 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-07 01:17:05.822921 | orchestrator | Wednesday 07 January 2026 01:10:37 +0000 (0:00:02.026) 0:01:43.371 *****
2026-01-07 01:17:05.822930 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.822938 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.822950 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.822956 | orchestrator |
2026-01-07 01:17:05.822962 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-07 01:17:05.822967 | orchestrator | Wednesday 07 January 2026 01:10:37 +0000 (0:00:00.359) 0:01:43.731 *****
2026-01-07 01:17:05.822972 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-07 01:17:05.822977 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.822982 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-07 01:17:05.822987 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.822993 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-07 01:17:05.822997 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-07 01:17:05.823002 | orchestrator |
2026-01-07 01:17:05.823007 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-07 01:17:05.823012 | orchestrator | Wednesday 07 January 2026 01:10:47 +0000 (0:00:09.428) 0:01:53.159 *****
2026-01-07 01:17:05.823017 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.823021 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.823026 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.823031 | orchestrator |
2026-01-07 01:17:05.823036 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-07 01:17:05.823041 | orchestrator | Wednesday 07 January 2026 01:10:47 +0000 (0:00:00.400) 0:01:53.560 *****
2026-01-07 01:17:05.823046 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-07 01:17:05.823051 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.823056 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-07 01:17:05.823061 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.823076 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-07 01:17:05.823081 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.823086 | orchestrator |
2026-01-07 01:17:05.823090 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-07 01:17:05.823102 | orchestrator | Wednesday 07 January 2026 01:10:48 +0000 (0:00:00.736) 0:01:54.207 *****
2026-01-07 01:17:05.823107 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.823112 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.823117 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.823122 | orchestrator |
2026-01-07 01:17:05.823127 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-07 01:17:05.823132 | orchestrator | Wednesday 07 January 2026 01:10:49 +0000 (0:00:00.947) 0:01:54.943 *****
2026-01-07 01:17:05.823137 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.823155 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.823214 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:05.823220 | orchestrator |
2026-01-07 01:17:05.823225 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-07 01:17:05.823231 | orchestrator | Wednesday 07 January 2026 01:10:50 +0000 (0:00:00.947) 0:01:55.891 *****
2026-01-07 01:17:05.823236 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.823241 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.823246 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:05.823251 | orchestrator | 2026-01-07 01:17:05.823256 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-01-07 01:17:05.823262 | orchestrator | Wednesday 07 January 2026 01:10:52 +0000 (0:00:02.243) 0:01:58.135 ***** 2026-01-07 01:17:05.823267 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.823272 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.823278 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:05.823287 | orchestrator | 2026-01-07 01:17:05.823293 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-07 01:17:05.823298 | orchestrator | Wednesday 07 January 2026 01:11:14 +0000 (0:00:22.341) 0:02:20.477 ***** 2026-01-07 01:17:05.823303 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.823308 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.823313 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:05.823318 | orchestrator | 2026-01-07 01:17:05.823323 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-07 01:17:05.823328 | orchestrator | Wednesday 07 January 2026 01:11:26 +0000 (0:00:11.922) 0:02:32.399 ***** 2026-01-07 01:17:05.823333 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:05.823338 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.823343 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.823348 | orchestrator | 2026-01-07 01:17:05.823353 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-01-07 01:17:05.823359 | orchestrator | Wednesday 07 January 2026 01:11:27 +0000 (0:00:00.852) 0:02:33.251 ***** 2026-01-07 
01:17:05.823365 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.823370 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.823375 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:05.823380 | orchestrator | 2026-01-07 01:17:05.823386 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-01-07 01:17:05.823392 | orchestrator | Wednesday 07 January 2026 01:11:40 +0000 (0:00:13.180) 0:02:46.431 ***** 2026-01-07 01:17:05.823397 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.823403 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.823408 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.823425 | orchestrator | 2026-01-07 01:17:05.823431 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-01-07 01:17:05.823436 | orchestrator | Wednesday 07 January 2026 01:11:41 +0000 (0:00:01.108) 0:02:47.540 ***** 2026-01-07 01:17:05.823441 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.823446 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.823451 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.823456 | orchestrator | 2026-01-07 01:17:05.823461 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-01-07 01:17:05.823466 | orchestrator | 2026-01-07 01:17:05.823471 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-07 01:17:05.823476 | orchestrator | Wednesday 07 January 2026 01:11:42 +0000 (0:00:00.545) 0:02:48.085 ***** 2026-01-07 01:17:05.823482 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:17:05.823488 | orchestrator | 2026-01-07 01:17:05.823501 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-01-07 01:17:05.823505 | 
orchestrator | Wednesday 07 January 2026 01:11:42 +0000 (0:00:00.521) 0:02:48.607 ***** 2026-01-07 01:17:05.823508 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-01-07 01:17:05.823511 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-01-07 01:17:05.823514 | orchestrator | 2026-01-07 01:17:05.823517 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-01-07 01:17:05.823520 | orchestrator | Wednesday 07 January 2026 01:11:45 +0000 (0:00:03.079) 0:02:51.687 ***** 2026-01-07 01:17:05.823523 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-01-07 01:17:05.823527 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-01-07 01:17:05.823530 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-01-07 01:17:05.823534 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-01-07 01:17:05.823537 | orchestrator | 2026-01-07 01:17:05.823544 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-01-07 01:17:05.823549 | orchestrator | Wednesday 07 January 2026 01:11:51 +0000 (0:00:05.281) 0:02:56.968 ***** 2026-01-07 01:17:05.823554 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:17:05.823559 | orchestrator | 2026-01-07 01:17:05.823564 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-01-07 01:17:05.823569 | orchestrator | Wednesday 07 January 2026 01:11:53 +0000 (0:00:02.641) 0:02:59.609 ***** 2026-01-07 01:17:05.823574 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:17:05.823580 | orchestrator | changed: 
[testbed-node-0] => (item=nova -> service) 2026-01-07 01:17:05.823585 | orchestrator | 2026-01-07 01:17:05.823590 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-01-07 01:17:05.823595 | orchestrator | Wednesday 07 January 2026 01:11:57 +0000 (0:00:03.589) 0:03:03.199 ***** 2026-01-07 01:17:05.823600 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:17:05.823605 | orchestrator | 2026-01-07 01:17:05.823610 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-01-07 01:17:05.823615 | orchestrator | Wednesday 07 January 2026 01:12:01 +0000 (0:00:03.922) 0:03:07.121 ***** 2026-01-07 01:17:05.823620 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-01-07 01:17:05.823625 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-01-07 01:17:05.823630 | orchestrator | 2026-01-07 01:17:05.823636 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-01-07 01:17:05.823641 | orchestrator | Wednesday 07 January 2026 01:12:08 +0000 (0:00:07.120) 0:03:14.242 ***** 2026-01-07 01:17:05.823649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.823666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.823676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.823682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.823689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.823694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.823699 | orchestrator | 2026-01-07 01:17:05.823716 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-01-07 01:17:05.823721 | orchestrator | Wednesday 07 January 2026 01:12:09 +0000 (0:00:01.392) 0:03:15.634 ***** 2026-01-07 01:17:05.823726 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.823731 | orchestrator | 2026-01-07 01:17:05.823737 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-07 01:17:05.823742 | orchestrator | Wednesday 07 January 2026 01:12:09 +0000 (0:00:00.136) 0:03:15.771 ***** 2026-01-07 01:17:05.823747 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.823752 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.823757 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.823763 | orchestrator | 2026-01-07 01:17:05.823768 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-07 01:17:05.823780 | orchestrator | Wednesday 07 January 2026 01:12:10 +0000 (0:00:00.275) 0:03:16.046 ***** 
2026-01-07 01:17:05.823789 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:17:05.823794 | orchestrator | 2026-01-07 01:17:05.823799 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-07 01:17:05.823805 | orchestrator | Wednesday 07 January 2026 01:12:11 +0000 (0:00:00.928) 0:03:16.975 ***** 2026-01-07 01:17:05.823810 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.823815 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.823820 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.823825 | orchestrator | 2026-01-07 01:17:05.823830 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-07 01:17:05.823835 | orchestrator | Wednesday 07 January 2026 01:12:11 +0000 (0:00:00.336) 0:03:17.311 ***** 2026-01-07 01:17:05.823840 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:17:05.823845 | orchestrator | 2026-01-07 01:17:05.823851 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-07 01:17:05.823856 | orchestrator | Wednesday 07 January 2026 01:12:12 +0000 (0:00:00.586) 0:03:17.898 ***** 2026-01-07 01:17:05.823861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.823867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.823962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.823974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.823980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.823986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.823992 | orchestrator | 2026-01-07 01:17:05.823997 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-07 01:17:05.824003 | orchestrator | Wednesday 07 January 2026 01:12:14 +0000 (0:00:02.760) 0:03:20.659 ***** 2026-01-07 01:17:05.824009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:05.824018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.824024 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.824034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:05.824062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.824068 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.824074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:05.824083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.824089 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.824094 | orchestrator | 2026-01-07 01:17:05.824099 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-07 01:17:05.824107 | orchestrator | Wednesday 07 January 2026 01:12:15 +0000 (0:00:00.498) 0:03:21.158 ***** 2026-01-07 01:17:05.824536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:05.824554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.824560 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.824565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:05.824594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.824622 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.824636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:05.824643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.824648 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.824654 | orchestrator | 2026-01-07 01:17:05.824660 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-07 01:17:05.824667 | orchestrator | Wednesday 07 January 2026 01:12:15 +0000 (0:00:00.687) 0:03:21.845 ***** 2026-01-07 01:17:05.824672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.824685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.824698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.824715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 
01:17:05.824720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.824726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.824736 | orchestrator | 2026-01-07 01:17:05.824741 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-07 01:17:05.824746 | orchestrator | Wednesday 07 January 2026 01:12:18 +0000 (0:00:02.185) 0:03:24.031 ***** 2026-01-07 01:17:05.824772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.824779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.824784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.824794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.824799 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.824809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.824814 | orchestrator | 2026-01-07 01:17:05.824819 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-07 01:17:05.824824 | orchestrator | Wednesday 07 January 2026 01:12:23 +0000 (0:00:05.358) 0:03:29.389 ***** 2026-01-07 01:17:05.824830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:05.824835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.824843 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.824849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:05.824855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.824860 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.824875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:05.824881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.824886 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.824895 | orchestrator | 2026-01-07 01:17:05.824900 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-07 01:17:05.824905 | orchestrator | Wednesday 07 January 2026 01:12:24 +0000 (0:00:00.556) 0:03:29.946 ***** 2026-01-07 01:17:05.824910 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:05.824915 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:05.824920 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:05.824926 | orchestrator | 2026-01-07 01:17:05.824931 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-01-07 
01:17:05.824936 | orchestrator | Wednesday 07 January 2026 01:12:25 +0000 (0:00:01.401) 0:03:31.348 ***** 2026-01-07 01:17:05.824941 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.824946 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.824951 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.824956 | orchestrator | 2026-01-07 01:17:05.824961 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-01-07 01:17:05.824966 | orchestrator | Wednesday 07 January 2026 01:12:25 +0000 (0:00:00.297) 0:03:31.646 ***** 2026-01-07 01:17:05.824972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.824983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.824988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:05.824997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825013 | orchestrator | 2026-01-07 01:17:05.825018 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-07 01:17:05.825023 | orchestrator | Wednesday 07 January 2026 01:12:28 +0000 (0:00:02.319) 0:03:33.965 ***** 2026-01-07 01:17:05.825028 | orchestrator | 2026-01-07 01:17:05.825034 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-07 01:17:05.825043 | orchestrator | Wednesday 07 January 2026 01:12:28 +0000 (0:00:00.135) 0:03:34.101 ***** 2026-01-07 01:17:05.825049 | orchestrator | 2026-01-07 01:17:05.825054 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-07 01:17:05.825059 | orchestrator | Wednesday 07 January 2026 01:12:28 +0000 (0:00:00.128) 0:03:34.230 ***** 2026-01-07 01:17:05.825063 | orchestrator | 2026-01-07 01:17:05.825069 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-07 01:17:05.825074 | orchestrator | Wednesday 07 January 2026 01:12:28 +0000 (0:00:00.135) 0:03:34.365 ***** 2026-01-07 01:17:05.825080 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:05.825085 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:05.825090 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:05.825093 | orchestrator | 2026-01-07 01:17:05.825096 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-07 01:17:05.825100 | orchestrator | Wednesday 07 January 2026 01:12:43 +0000 (0:00:14.570) 0:03:48.936 ***** 2026-01-07 01:17:05.825105 | orchestrator | changed: [testbed-node-2] 
2026-01-07 01:17:05.825108 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:05.825111 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:05.825114 | orchestrator | 2026-01-07 01:17:05.825118 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-07 01:17:05.825121 | orchestrator | 2026-01-07 01:17:05.825124 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:17:05.825127 | orchestrator | Wednesday 07 January 2026 01:12:51 +0000 (0:00:08.212) 0:03:57.148 ***** 2026-01-07 01:17:05.825131 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:17:05.825135 | orchestrator | 2026-01-07 01:17:05.825138 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:17:05.825141 | orchestrator | Wednesday 07 January 2026 01:12:52 +0000 (0:00:01.195) 0:03:58.343 ***** 2026-01-07 01:17:05.825144 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.825147 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.825150 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.825154 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.825158 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.825161 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.825165 | orchestrator | 2026-01-07 01:17:05.825169 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-01-07 01:17:05.825173 | orchestrator | Wednesday 07 January 2026 01:12:53 +0000 (0:00:00.623) 0:03:58.966 ***** 2026-01-07 01:17:05.825176 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.825180 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.825183 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 01:17:05.825187 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:17:05.825191 | orchestrator | 2026-01-07 01:17:05.825195 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-07 01:17:05.825198 | orchestrator | Wednesday 07 January 2026 01:12:54 +0000 (0:00:00.970) 0:03:59.937 ***** 2026-01-07 01:17:05.825202 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-07 01:17:05.825206 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-07 01:17:05.825210 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-07 01:17:05.825213 | orchestrator | 2026-01-07 01:17:05.825217 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-07 01:17:05.825221 | orchestrator | Wednesday 07 January 2026 01:12:54 +0000 (0:00:00.636) 0:04:00.573 ***** 2026-01-07 01:17:05.825224 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-07 01:17:05.825228 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-07 01:17:05.825231 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-07 01:17:05.825235 | orchestrator | 2026-01-07 01:17:05.825239 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-07 01:17:05.825242 | orchestrator | Wednesday 07 January 2026 01:12:55 +0000 (0:00:01.301) 0:04:01.875 ***** 2026-01-07 01:17:05.825246 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-01-07 01:17:05.825250 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.825253 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-07 01:17:05.825257 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.825260 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-01-07 01:17:05.825263 | 
orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.825267 | orchestrator | 2026-01-07 01:17:05.825271 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-07 01:17:05.825274 | orchestrator | Wednesday 07 January 2026 01:12:56 +0000 (0:00:00.524) 0:04:02.399 ***** 2026-01-07 01:17:05.825278 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 01:17:05.825284 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 01:17:05.825287 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.825291 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 01:17:05.825295 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 01:17:05.825298 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-07 01:17:05.825302 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.825306 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 01:17:05.825309 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-07 01:17:05.825313 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 01:17:05.825317 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.825320 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-07 01:17:05.825329 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-07 01:17:05.825334 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-07 01:17:05.825339 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-07 01:17:05.825344 | orchestrator | 2026-01-07 
01:17:05.825350 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-07 01:17:05.825355 | orchestrator | Wednesday 07 January 2026 01:12:58 +0000 (0:00:02.039) 0:04:04.439 ***** 2026-01-07 01:17:05.825359 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.825364 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.825369 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.825374 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:05.825379 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:05.825384 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:05.825390 | orchestrator | 2026-01-07 01:17:05.825396 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-07 01:17:05.825401 | orchestrator | Wednesday 07 January 2026 01:12:59 +0000 (0:00:01.097) 0:04:05.537 ***** 2026-01-07 01:17:05.825407 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.825412 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.825418 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.825424 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:05.825429 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:05.825434 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:05.825439 | orchestrator | 2026-01-07 01:17:05.825445 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-07 01:17:05.825450 | orchestrator | Wednesday 07 January 2026 01:13:01 +0000 (0:00:01.644) 0:04:07.181 ***** 2026-01-07 01:17:05.825457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825569 | orchestrator | 2026-01-07 01:17:05.825574 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:17:05.825579 | orchestrator | Wednesday 07 January 2026 01:13:03 +0000 (0:00:02.128) 0:04:09.309 ***** 2026-01-07 01:17:05.825584 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:17:05.825590 | orchestrator | 2026-01-07 01:17:05.825595 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-07 01:17:05.825600 | orchestrator | Wednesday 07 January 2026 01:13:04 +0000 (0:00:01.195) 0:04:10.505 ***** 2026-01-07 01:17:05.825612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825630 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825691 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.825745 | orchestrator | 2026-01-07 
01:17:05.825750 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-07 01:17:05.825755 | orchestrator | Wednesday 07 January 2026 01:13:07 +0000 (0:00:03.357) 0:04:13.862 ***** 2026-01-07 01:17:05.825760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:05.825766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:05.825771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.825776 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.825787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:05.825792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:05.825801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.825806 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.825811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:05.825816 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:05.825827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.825832 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:05.825837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:05.825845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.825850 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.825855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:05.825860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.825865 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.825870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:05.825875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.825880 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.825884 | orchestrator |
2026-01-07 01:17:05.825889 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-01-07 01:17:05.826008 | orchestrator | Wednesday 07 January 2026 01:13:09 +0000 (0:00:01.634) 0:04:15.497 *****
2026-01-07 01:17:05.826061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:05.826073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:05.826079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826085 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:05.826091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:05.826096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:05.826110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826120 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:05.826125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:05.826131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:05.826136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:05.826147 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.826153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826158 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:05.826169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:05.826179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826184 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.826190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:05.826195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826201 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.826206 | orchestrator |
2026-01-07 01:17:05.826211 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-07 01:17:05.826218 | orchestrator | Wednesday 07 January 2026 01:13:11 +0000 (0:00:02.307) 0:04:17.805 *****
2026-01-07 01:17:05.826223 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.826229 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.826234 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.826239 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 01:17:05.826244 | orchestrator |
2026-01-07 01:17:05.826249 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-01-07 01:17:05.826254 | orchestrator | Wednesday 07 January 2026 01:13:12 +0000 (0:00:01.049) 0:04:18.854 *****
2026-01-07 01:17:05.826259 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-07 01:17:05.826264 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 01:17:05.826269 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-07 01:17:05.826275 | orchestrator |
2026-01-07 01:17:05.826281 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-01-07 01:17:05.826286 | orchestrator | Wednesday 07 January 2026 01:13:13 +0000 (0:00:01.021) 0:04:19.875 *****
2026-01-07 01:17:05.826292 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 01:17:05.826297 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-07 01:17:05.826302 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-07 01:17:05.826311 | orchestrator |
2026-01-07 01:17:05.826316 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-01-07 01:17:05.826321 | orchestrator | Wednesday 07 January 2026 01:13:14 +0000 (0:00:00.956) 0:04:20.831 *****
2026-01-07 01:17:05.826326 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:17:05.826330 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:17:05.826335 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:17:05.826340 | orchestrator |
2026-01-07 01:17:05.826345 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-01-07 01:17:05.826350 | orchestrator | Wednesday 07 January 2026 01:13:15 +0000 (0:00:00.477) 0:04:21.309 *****
2026-01-07 01:17:05.826354 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:17:05.826359 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:17:05.826363 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:17:05.826368 | orchestrator |
2026-01-07 01:17:05.826374 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-01-07 01:17:05.826380 | orchestrator | Wednesday 07 January 2026 01:13:16 +0000 (0:00:00.761) 0:04:22.071 *****
2026-01-07 01:17:05.826385 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:17:05.826395 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:17:05.826403 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:17:05.826408 | orchestrator |
2026-01-07 01:17:05.826413 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-01-07 01:17:05.826419 | orchestrator | Wednesday 07 January 2026 01:13:17 +0000 (0:00:01.087) 0:04:23.159 *****
2026-01-07 01:17:05.826424 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:17:05.826429 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:17:05.826435 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:17:05.826440 | orchestrator |
2026-01-07 01:17:05.826446 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-01-07 01:17:05.826452 | orchestrator | Wednesday 07 January 2026 01:13:18 +0000 (0:00:01.039) 0:04:24.198 *****
2026-01-07 01:17:05.826457 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:17:05.826463 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:17:05.826468 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:17:05.826472 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-01-07 01:17:05.826477 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-01-07 01:17:05.826481 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-01-07 01:17:05.826486 | orchestrator |
2026-01-07 01:17:05.826491 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-01-07 01:17:05.826496 | orchestrator | Wednesday 07 January 2026 01:13:21 +0000 (0:00:03.615) 0:04:27.813 *****
2026-01-07 01:17:05.826501 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:05.826506 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:05.826511 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:05.826516 | orchestrator |
2026-01-07 01:17:05.826522 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-01-07 01:17:05.826527 | orchestrator | Wednesday 07 January 2026 01:13:22 +0000 (0:00:00.554) 0:04:28.368 *****
2026-01-07 01:17:05.826533 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:05.826539 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:05.826545 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:05.826551 | orchestrator |
2026-01-07 01:17:05.826557 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-01-07 01:17:05.826562 | orchestrator | Wednesday 07 January 2026 01:13:22 +0000 (0:00:00.321) 0:04:28.690 *****
2026-01-07 01:17:05.826568 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:05.826573 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:05.826579 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:05.826584 | orchestrator |
2026-01-07 01:17:05.826590 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-01-07 01:17:05.826603 | orchestrator | Wednesday 07 January 2026 01:13:23 +0000 (0:00:01.163) 0:04:29.853 *****
2026-01-07 01:17:05.826609 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-07 01:17:05.826615 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-07 01:17:05.826620 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-07 01:17:05.826626 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-07 01:17:05.826632 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-07 01:17:05.826638 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-07 01:17:05.826643 | orchestrator |
2026-01-07 01:17:05.826649 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-01-07 01:17:05.826655 | orchestrator | Wednesday 07 January 2026 01:13:27 +0000 (0:00:03.189) 0:04:33.043 *****
2026-01-07 01:17:05.826660 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 01:17:05.826666 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 01:17:05.826670 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 01:17:05.826675 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 01:17:05.826680 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:05.826684 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 01:17:05.826689 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:05.826695 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 01:17:05.826701 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:05.826718 | orchestrator |
2026-01-07 01:17:05.826723 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-01-07 01:17:05.826729 | orchestrator | Wednesday 07 January 2026 01:13:30 +0000 (0:00:03.353) 0:04:36.397 *****
2026-01-07 01:17:05.826734 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:05.826740 | orchestrator |
2026-01-07 01:17:05.826745 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-01-07 01:17:05.826751 | orchestrator | Wednesday 07 January 2026 01:13:30 +0000 (0:00:00.130) 0:04:36.527 *****
2026-01-07 01:17:05.826755 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:05.826761 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:05.826766 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:05.826771 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.826776 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.826782 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.826788 | orchestrator |
2026-01-07 01:17:05.826794 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-01-07 01:17:05.826809 | orchestrator | Wednesday 07 January 2026 01:13:31 +0000 (0:00:00.749) 0:04:37.126 *****
2026-01-07 01:17:05.826815 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 01:17:05.826822 | orchestrator |
2026-01-07 01:17:05.826827 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-01-07 01:17:05.826833 | orchestrator | Wednesday 07 January 2026 01:13:31 +0000 (0:00:00.832) 0:04:37.875 *****
2026-01-07 01:17:05.826838 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:05.826843 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:05.826849 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:05.826854 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.826860 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.826865 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.826876 | orchestrator |
2026-01-07 01:17:05.826881 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-01-07 01:17:05.826887 | orchestrator | Wednesday 07 January 2026 01:13:32 +0000 (0:00:00.832) 0:04:38.708 *****
2026-01-07 01:17:05.826893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:05.826900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:05.826906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:05.826911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:05.826925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:05.826936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:05.826942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:05.826949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:05.826955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:05.826961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.826994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:05.827000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute',
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827006 | orchestrator | 2026-01-07 01:17:05.827012 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-07 01:17:05.827017 | orchestrator | Wednesday 07 January 2026 01:13:36 +0000 (0:00:03.298) 0:04:42.007 ***** 2026-01-07 01:17:05.827023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:05.827035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:05.827045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:05.827051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:05.827058 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:05.827063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:05.827069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.827140 | orchestrator | 2026-01-07 01:17:05.827146 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-07 01:17:05.827152 | orchestrator | Wednesday 07 January 2026 01:13:42 +0000 (0:00:06.368) 0:04:48.375 ***** 2026-01-07 01:17:05.827158 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 01:17:05.827164 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.827169 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.827174 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.827180 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.827185 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.827191 | orchestrator | 2026-01-07 01:17:05.827196 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-07 01:17:05.827202 | orchestrator | Wednesday 07 January 2026 01:13:43 +0000 (0:00:01.423) 0:04:49.798 ***** 2026-01-07 01:17:05.827207 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-07 01:17:05.827213 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-07 01:17:05.827219 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-07 01:17:05.827224 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-07 01:17:05.827229 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.827235 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-07 01:17:05.827241 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-07 01:17:05.827246 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-07 01:17:05.827252 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.827257 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-07 01:17:05.827262 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-07 01:17:05.827268 | 
orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.827274 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-07 01:17:05.827280 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-07 01:17:05.827286 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-07 01:17:05.827291 | orchestrator | 2026-01-07 01:17:05.827297 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-01-07 01:17:05.827302 | orchestrator | Wednesday 07 January 2026 01:13:47 +0000 (0:00:03.664) 0:04:53.463 ***** 2026-01-07 01:17:05.827311 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.827317 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.827322 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.827328 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.827334 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.827339 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.827345 | orchestrator | 2026-01-07 01:17:05.827350 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-07 01:17:05.827356 | orchestrator | Wednesday 07 January 2026 01:13:48 +0000 (0:00:00.605) 0:04:54.068 ***** 2026-01-07 01:17:05.827361 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-07 01:17:05.827367 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-07 01:17:05.827373 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-07 01:17:05.827379 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-07 01:17:05.827384 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-07 01:17:05.827390 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-07 01:17:05.827396 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-07 01:17:05.827401 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-07 01:17:05.827416 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-07 01:17:05.827421 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-07 01:17:05.827427 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.827433 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-07 01:17:05.827438 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.827444 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-07 01:17:05.827449 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.827455 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:17:05.827460 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:17:05.827466 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:17:05.827472 | 
orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:17:05.827477 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:17:05.827482 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:17:05.827488 | orchestrator | 2026-01-07 01:17:05.827493 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-01-07 01:17:05.827499 | orchestrator | Wednesday 07 January 2026 01:13:53 +0000 (0:00:05.436) 0:04:59.505 ***** 2026-01-07 01:17:05.827504 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 01:17:05.827510 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 01:17:05.827520 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 01:17:05.827526 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 01:17:05.827531 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-07 01:17:05.827537 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 01:17:05.827542 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 01:17:05.827548 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-07 01:17:05.827554 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-07 01:17:05.827560 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 01:17:05.827565 
| orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 01:17:05.827571 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 01:17:05.827576 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-07 01:17:05.827582 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.827588 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-07 01:17:05.827593 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.827599 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 01:17:05.827605 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 01:17:05.827611 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-07 01:17:05.827617 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.827623 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 01:17:05.827628 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 01:17:05.827634 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 01:17:05.827639 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 01:17:05.827644 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 01:17:05.827650 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 01:17:05.827656 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 01:17:05.827661 | orchestrator | 2026-01-07 01:17:05.827667 | 
orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-07 01:17:05.827672 | orchestrator | Wednesday 07 January 2026 01:14:00 +0000 (0:00:06.877) 0:05:06.383 ***** 2026-01-07 01:17:05.827677 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.827683 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.827689 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.827698 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.827732 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.827743 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.827749 | orchestrator | 2026-01-07 01:17:05.827754 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-01-07 01:17:05.827760 | orchestrator | Wednesday 07 January 2026 01:14:01 +0000 (0:00:00.779) 0:05:07.162 ***** 2026-01-07 01:17:05.827765 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.827771 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.827776 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.827782 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.827787 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.827796 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.827802 | orchestrator | 2026-01-07 01:17:05.827807 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-07 01:17:05.827813 | orchestrator | Wednesday 07 January 2026 01:14:01 +0000 (0:00:00.581) 0:05:07.744 ***** 2026-01-07 01:17:05.827819 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.827825 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.827830 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.827836 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:05.827842 | orchestrator | changed: [testbed-node-4] 
2026-01-07 01:17:05.827847 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:05.827853 | orchestrator | 2026-01-07 01:17:05.827858 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-07 01:17:05.827863 | orchestrator | Wednesday 07 January 2026 01:14:04 +0000 (0:00:02.140) 0:05:09.885 ***** 2026-01-07 01:17:05.827870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:05.827876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:05.827883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.827889 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.827902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:05.827911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:05.827916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.827922 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.827928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:05.827934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:05.827940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.827947 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.827959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:05.827969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.827974 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.827980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:05.827986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.827992 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.827998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:05.828004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:05.828010 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.828015 | orchestrator | 2026-01-07 01:17:05.828021 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-07 01:17:05.828027 | orchestrator | Wednesday 07 January 2026 01:14:05 +0000 (0:00:01.301) 0:05:11.187 ***** 2026-01-07 01:17:05.828033 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-07 01:17:05.828042 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic) 
 2026-01-07 01:17:05.828047 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.828053 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-07 01:17:05.828058 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-07 01:17:05.828064 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.828069 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-07 01:17:05.828075 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-07 01:17:05.828080 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.828085 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-07 01:17:05.828091 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-07 01:17:05.828098 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.828105 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-07 01:17:05.828110 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-07 01:17:05.828115 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.828120 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-07 01:17:05.828126 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-07 01:17:05.828132 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.828138 | orchestrator | 2026-01-07 01:17:05.828144 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-07 01:17:05.828150 | orchestrator | Wednesday 07 January 2026 01:14:06 +0000 (0:00:00.843) 0:05:12.030 ***** 2026-01-07 01:17:05.828156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:05 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:17:05.828215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828312 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:05.828322 | orchestrator | 2026-01-07 01:17:05.828328 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:17:05.828334 | orchestrator | Wednesday 07 January 2026 01:14:08 +0000 (0:00:02.466) 0:05:14.497 ***** 2026-01-07 01:17:05.828340 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.828345 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.828351 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.828357 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.828362 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.828366 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.828370 | orchestrator | 2026-01-07 01:17:05.828375 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:17:05.828380 | orchestrator | Wednesday 07 January 2026 01:14:09 +0000 (0:00:00.791) 0:05:15.288 ***** 2026-01-07 01:17:05.828384 | orchestrator | 2026-01-07 01:17:05.828389 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:17:05.828394 | orchestrator | Wednesday 07 January 2026 01:14:09 +0000 (0:00:00.137) 0:05:15.425 ***** 2026-01-07 01:17:05.828398 | orchestrator | 2026-01-07 01:17:05.828404 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 
2026-01-07 01:17:05.828409 | orchestrator | Wednesday 07 January 2026 01:14:09 +0000 (0:00:00.138) 0:05:15.564 ***** 2026-01-07 01:17:05.828413 | orchestrator | 2026-01-07 01:17:05.828418 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:17:05.828423 | orchestrator | Wednesday 07 January 2026 01:14:09 +0000 (0:00:00.131) 0:05:15.696 ***** 2026-01-07 01:17:05.828429 | orchestrator | 2026-01-07 01:17:05.828434 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:17:05.828439 | orchestrator | Wednesday 07 January 2026 01:14:09 +0000 (0:00:00.130) 0:05:15.827 ***** 2026-01-07 01:17:05.828443 | orchestrator | 2026-01-07 01:17:05.828448 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:17:05.828453 | orchestrator | Wednesday 07 January 2026 01:14:10 +0000 (0:00:00.130) 0:05:15.957 ***** 2026-01-07 01:17:05.828458 | orchestrator | 2026-01-07 01:17:05.828463 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-07 01:17:05.828473 | orchestrator | Wednesday 07 January 2026 01:14:10 +0000 (0:00:00.342) 0:05:16.300 ***** 2026-01-07 01:17:05.828478 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:05.828483 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:05.828488 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:05.828493 | orchestrator | 2026-01-07 01:17:05.828497 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-07 01:17:05.828502 | orchestrator | Wednesday 07 January 2026 01:14:17 +0000 (0:00:06.908) 0:05:23.208 ***** 2026-01-07 01:17:05.828507 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:05.828512 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:05.828517 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:05.828523 | 
orchestrator | 2026-01-07 01:17:05.828528 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-07 01:17:05.828533 | orchestrator | Wednesday 07 January 2026 01:14:29 +0000 (0:00:11.935) 0:05:35.143 ***** 2026-01-07 01:17:05.828537 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:05.828543 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:05.828548 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:05.828554 | orchestrator | 2026-01-07 01:17:05.828558 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-07 01:17:05.828563 | orchestrator | Wednesday 07 January 2026 01:14:50 +0000 (0:00:21.042) 0:05:56.186 ***** 2026-01-07 01:17:05.828568 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:05.828573 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:05.828582 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:05.828587 | orchestrator | 2026-01-07 01:17:05.828592 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-07 01:17:05.828597 | orchestrator | Wednesday 07 January 2026 01:15:20 +0000 (0:00:30.214) 0:06:26.401 ***** 2026-01-07 01:17:05.828601 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:05.828606 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:05.828611 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:05.828616 | orchestrator | 2026-01-07 01:17:05.828621 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-07 01:17:05.828627 | orchestrator | Wednesday 07 January 2026 01:15:21 +0000 (0:00:00.667) 0:06:27.068 ***** 2026-01-07 01:17:05.828633 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:05.828639 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:05.828643 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:05.828648 | orchestrator | 
2026-01-07 01:17:05.828653 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-07 01:17:05.828658 | orchestrator | Wednesday 07 January 2026 01:15:21 +0000 (0:00:00.644) 0:06:27.713 ***** 2026-01-07 01:17:05.828663 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:05.828668 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:05.828672 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:05.828677 | orchestrator | 2026-01-07 01:17:05.828682 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-07 01:17:05.828687 | orchestrator | Wednesday 07 January 2026 01:15:53 +0000 (0:00:31.754) 0:06:59.468 ***** 2026-01-07 01:17:05.828693 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.828699 | orchestrator | 2026-01-07 01:17:05.828732 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-07 01:17:05.828743 | orchestrator | Wednesday 07 January 2026 01:15:53 +0000 (0:00:00.129) 0:06:59.598 ***** 2026-01-07 01:17:05.828748 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.828754 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.828759 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.828764 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.828770 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.828776 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-01-07 01:17:05.828782 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-07 01:17:05.828787 | orchestrator | 2026-01-07 01:17:05.828792 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-07 01:17:05.828797 | orchestrator | Wednesday 07 January 2026 01:16:14 +0000 (0:00:20.891) 0:07:20.489 ***** 2026-01-07 01:17:05.828802 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.828807 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.828812 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.828817 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.828822 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.828827 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.828832 | orchestrator | 2026-01-07 01:17:05.828837 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-07 01:17:05.828842 | orchestrator | Wednesday 07 January 2026 01:16:23 +0000 (0:00:08.557) 0:07:29.047 ***** 2026-01-07 01:17:05.828847 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:05.828852 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:05.828857 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.828862 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.828867 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.828872 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-01-07 01:17:05.828877 | orchestrator | 2026-01-07 01:17:05.828883 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-07 01:17:05.828892 | orchestrator | Wednesday 07 January 2026 01:16:26 +0000 (0:00:03.495) 0:07:32.542 ***** 2026-01-07 01:17:05.828897 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-07 01:17:05.828902 | 
orchestrator | 2026-01-07 01:17:05.828907 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-07 01:17:05.828913 | orchestrator | Wednesday 07 January 2026 01:16:41 +0000 (0:00:14.771) 0:07:47.314 ***** 2026-01-07 01:17:05.828918 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-07 01:17:05.828923 | orchestrator | 2026-01-07 01:17:05.828929 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-07 01:17:05.828934 | orchestrator | Wednesday 07 January 2026 01:16:42 +0000 (0:00:01.261) 0:07:48.575 ***** 2026-01-07 01:17:05.828939 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:05.828944 | orchestrator | 2026-01-07 01:17:05.828952 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-07 01:17:05.828957 | orchestrator | Wednesday 07 January 2026 01:16:43 +0000 (0:00:01.178) 0:07:49.754 ***** 2026-01-07 01:17:05.828962 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-07 01:17:05.828968 | orchestrator | 2026-01-07 01:17:05.828973 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-01-07 01:17:05.828978 | orchestrator | Wednesday 07 January 2026 01:16:56 +0000 (0:00:12.495) 0:08:02.250 ***** 2026-01-07 01:17:05.828983 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:17:05.828988 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:17:05.828993 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:17:05.828998 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:05.829003 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:17:05.829009 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:17:05.829014 | orchestrator | 2026-01-07 01:17:05.829019 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-07 01:17:05.829024 | orchestrator | 2026-01-07 
01:17:05.829029 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-07 01:17:05.829035 | orchestrator | Wednesday 07 January 2026 01:16:58 +0000 (0:00:01.695) 0:08:03.946 ***** 2026-01-07 01:17:05.829040 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:05.829045 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:05.829050 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:05.829055 | orchestrator | 2026-01-07 01:17:05.829061 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-07 01:17:05.829065 | orchestrator | 2026-01-07 01:17:05.829071 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-07 01:17:05.829076 | orchestrator | Wednesday 07 January 2026 01:16:59 +0000 (0:00:01.143) 0:08:05.089 ***** 2026-01-07 01:17:05.829081 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:05.829086 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:05.829092 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:05.829097 | orchestrator | 2026-01-07 01:17:05.829103 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-01-07 01:17:05.829108 | orchestrator | 2026-01-07 01:17:05.829114 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-07 01:17:05.829119 | orchestrator | Wednesday 07 January 2026 01:16:59 +0000 (0:00:00.483) 0:08:05.573 ***** 2026-01-07 01:17:05.829124 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-07 01:17:05.829129 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-07 01:17:05.829132 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-07 01:17:05.829135 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-01-07 01:17:05.829138 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-01-07 01:17:05.829141 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:05.829144 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-01-07 01:17:05.829148 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:17:05.829153 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-01-07 01:17:05.829159 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-01-07 01:17:05.829162 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-01-07 01:17:05.829165 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:05.829168 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:05.829171 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-01-07 01:17:05.829174 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:17:05.829177 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-01-07 01:17:05.829181 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-01-07 01:17:05.829184 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-01-07 01:17:05.829187 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:05.829190 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:05.829193 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-01-07 01:17:05.829196 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-01-07 01:17:05.829199 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-01-07 01:17:05.829202 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-01-07 01:17:05.829205 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-01-07 01:17:05.829208 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:05.829211 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:05.829214 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-01-07 01:17:05.829217 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-01-07 01:17:05.829221 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-01-07 01:17:05.829224 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-01-07 01:17:05.829227 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-01-07 01:17:05.829230 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:05.829233 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.829236 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.829239 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-01-07 01:17:05.829242 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-01-07 01:17:05.829245 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-01-07 01:17:05.829248 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-01-07 01:17:05.829251 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-01-07 01:17:05.829256 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:05.829260 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.829263 | orchestrator |
2026-01-07 01:17:05.829266 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-01-07 01:17:05.829269 | orchestrator |
2026-01-07 01:17:05.829272 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-01-07 01:17:05.829275 | orchestrator | Wednesday 07 January 2026 01:17:01 +0000 (0:00:01.367) 0:08:06.941 *****
2026-01-07 01:17:05.829278 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-01-07 01:17:05.829281 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-01-07 01:17:05.829285 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.829288 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-01-07 01:17:05.829291 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-01-07 01:17:05.829294 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.829297 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-01-07 01:17:05.829302 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-01-07 01:17:05.829305 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.829309 | orchestrator |
2026-01-07 01:17:05.829312 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-01-07 01:17:05.829315 | orchestrator |
2026-01-07 01:17:05.829318 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-01-07 01:17:05.829321 | orchestrator | Wednesday 07 January 2026 01:17:01 +0000 (0:00:00.748) 0:08:07.690 *****
2026-01-07 01:17:05.829324 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.829327 | orchestrator |
2026-01-07 01:17:05.829330 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-01-07 01:17:05.829333 | orchestrator |
2026-01-07 01:17:05.829336 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-01-07 01:17:05.829339 | orchestrator | Wednesday 07 January 2026 01:17:02 +0000 (0:00:00.747) 0:08:08.437 *****
2026-01-07 01:17:05.829342 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:05.829345 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:05.829348 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:05.829351 | orchestrator |
2026-01-07 01:17:05.829355 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:17:05.829358 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:17:05.829361 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-01-07 01:17:05.829365 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-07 01:17:05.829368 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-07 01:17:05.829373 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-07 01:17:05.829376 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-01-07 01:17:05.829379 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-01-07 01:17:05.829382 | orchestrator |
2026-01-07 01:17:05.829385 | orchestrator |
2026-01-07 01:17:05.829388 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:17:05.829392 | orchestrator | Wednesday 07 January 2026 01:17:03 +0000 (0:00:00.443) 0:08:08.880 *****
2026-01-07 01:17:05.829395 | orchestrator | ===============================================================================
2026-01-07 01:17:05.829398 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.75s
2026-01-07 01:17:05.829401 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.47s
2026-01-07 01:17:05.829404 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.21s
2026-01-07 01:17:05.829407 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.34s
2026-01-07 01:17:05.829410 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.04s
2026-01-07 01:17:05.829413 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.89s
2026-01-07 01:17:05.829416 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.57s
2026-01-07 01:17:05.829419 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.94s
2026-01-07 01:17:05.829422 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.77s
2026-01-07 01:17:05.829425 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 14.57s
2026-01-07 01:17:05.829430 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.94s
2026-01-07 01:17:05.829433 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.18s
2026-01-07 01:17:05.829436 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.50s
2026-01-07 01:17:05.829440 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.94s
2026-01-07 01:17:05.829443 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.92s
2026-01-07 01:17:05.829448 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.43s
2026-01-07 01:17:05.829451 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.56s
2026-01-07 01:17:05.829454 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.21s
2026-01-07 01:17:05.829457 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.12s
2026-01-07 01:17:05.829460 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 6.91s
2026-01-07 01:17:08.865443 | orchestrator | 2026-01-07 01:17:08 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:08.865496 | orchestrator | 2026-01-07 01:17:08 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:11.909340 | orchestrator | 2026-01-07 01:17:11 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:11.909394 | orchestrator | 2026-01-07 01:17:11 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:14.952210 | orchestrator | 2026-01-07 01:17:14 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:14.952275 | orchestrator | 2026-01-07 01:17:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:18.000437 | orchestrator | 2026-01-07 01:17:18 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:18.000499 | orchestrator | 2026-01-07 01:17:18 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:21.049884 | orchestrator | 2026-01-07 01:17:21 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:21.049944 | orchestrator | 2026-01-07 01:17:21 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:24.094819 | orchestrator | 2026-01-07 01:17:24 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:24.094882 | orchestrator | 2026-01-07 01:17:24 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:27.142407 | orchestrator | 2026-01-07 01:17:27 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:27.142460 | orchestrator | 2026-01-07 01:17:27 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:30.193118 | orchestrator | 2026-01-07 01:17:30 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:30.193176 | orchestrator | 2026-01-07 01:17:30 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:33.239896 | orchestrator | 2026-01-07 01:17:33 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state STARTED
2026-01-07 01:17:33.239956 | orchestrator | 2026-01-07 01:17:33 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:36.284191 | orchestrator | 2026-01-07 01:17:36 | INFO  | Task 4032fb2d-7f86-485d-b434-924bae28cb36 is in state SUCCESS
2026-01-07 01:17:36.285550 | orchestrator |
2026-01-07 01:17:36.285609 | orchestrator |
2026-01-07 01:17:36.285631 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:17:36.285639 | orchestrator |
2026-01-07 01:17:36.285647 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:17:36.285655 | orchestrator | Wednesday 07 January 2026 01:12:55 +0000 (0:00:00.280) 0:00:00.280 *****
2026-01-07 01:17:36.285681 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.285692 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:36.285708 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:36.285726 | orchestrator |
2026-01-07 01:17:36.285738 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:17:36.285750 | orchestrator | Wednesday 07 January 2026 01:12:55 +0000 (0:00:00.299) 0:00:00.579 *****
2026-01-07 01:17:36.285833 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-01-07 01:17:36.285846 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-01-07 01:17:36.285857 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-01-07 01:17:36.285870 | orchestrator |
2026-01-07 01:17:36.285883 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-01-07 01:17:36.285895 | orchestrator |
2026-01-07 01:17:36.285908 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:17:36.285922 | orchestrator | Wednesday 07 January 2026 01:12:55 +0000 (0:00:00.421) 0:00:01.000 *****
2026-01-07 01:17:36.285935 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:36.285949 | orchestrator |
2026-01-07 01:17:36.285962 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-01-07 01:17:36.285974 | orchestrator | Wednesday 07 January 2026 01:12:56 +0000 (0:00:00.579) 0:00:01.580 *****
2026-01-07 01:17:36.285988 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-01-07 01:17:36.285996 | orchestrator |
2026-01-07 01:17:36.286003 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-01-07 01:17:36.286010 | orchestrator | Wednesday 07 January 2026 01:12:59 +0000 (0:00:03.147) 0:00:04.728 *****
2026-01-07 01:17:36.286070 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-01-07 01:17:36.286086 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-01-07 01:17:36.286100 | orchestrator |
2026-01-07 01:17:36.286127 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-01-07 01:17:36.286141 | orchestrator | Wednesday 07 January 2026 01:13:05 +0000 (0:00:05.947) 0:00:10.675 *****
2026-01-07 01:17:36.286151 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-07 01:17:36.286160 | orchestrator |
2026-01-07 01:17:36.286168 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-01-07 01:17:36.286175 | orchestrator | Wednesday 07 January 2026 01:13:08 +0000 (0:00:02.950) 0:00:13.626 *****
2026-01-07 01:17:36.286183 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:17:36.286191 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-07 01:17:36.286199 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-07 01:17:36.286207 | orchestrator |
2026-01-07 01:17:36.286215 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-01-07 01:17:36.286222 | orchestrator | Wednesday 07 January 2026 01:13:15 +0000 (0:00:07.363) 0:00:20.990 *****
2026-01-07 01:17:36.286230 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-07 01:17:36.286238 | orchestrator |
2026-01-07 01:17:36.286246 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-01-07 01:17:36.286253 | orchestrator | Wednesday 07 January 2026 01:13:18 +0000 (0:00:03.060) 0:00:24.050 *****
2026-01-07 01:17:36.286261 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-07 01:17:36.286269 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-07 01:17:36.286276 | orchestrator |
2026-01-07 01:17:36.286284 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-01-07 01:17:36.286292 | orchestrator | Wednesday 07 January 2026 01:13:25 +0000 (0:00:06.762) 0:00:30.813 *****
2026-01-07 01:17:36.286309 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-01-07 01:17:36.286317 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-01-07 01:17:36.286324 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-01-07 01:17:36.286332 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-01-07 01:17:36.286339 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-01-07 01:17:36.286347 | orchestrator |
2026-01-07 01:17:36.286355 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:17:36.286367 | orchestrator | Wednesday 07 January 2026 01:13:41 +0000 (0:00:15.868) 0:00:46.682 *****
2026-01-07 01:17:36.286385 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:36.286397 | orchestrator |
2026-01-07 01:17:36.286408 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-01-07 01:17:36.286420 | orchestrator | Wednesday 07 January 2026 01:13:42 +0000 (0:00:00.557) 0:00:47.240 *****
2026-01-07 01:17:36.286433 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.286445 | orchestrator |
2026-01-07 01:17:36.286457 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-01-07 01:17:36.286464 | orchestrator | Wednesday 07 January 2026 01:13:47 +0000 (0:00:05.334) 0:00:52.894 *****
2026-01-07 01:17:36.286470 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.286477 | orchestrator |
2026-01-07 01:17:36.286484 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-07 01:17:36.286505 | orchestrator | Wednesday 07 January 2026 01:13:53 +0000 (0:00:03.440) 0:00:58.229 *****
2026-01-07 01:17:36.286512 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.286519 | orchestrator |
2026-01-07 01:17:36.286526 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-01-07 01:17:36.286532 | orchestrator | Wednesday 07 January 2026 01:13:56 +0000 (0:00:03.440) 0:01:01.669 *****
2026-01-07 01:17:36.286548 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-07 01:17:36.286555 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-07 01:17:36.286562 | orchestrator |
2026-01-07 01:17:36.286568 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-01-07 01:17:36.286575 | orchestrator | Wednesday 07 January 2026 01:14:06 +0000 (0:00:09.705) 0:01:11.374 *****
2026-01-07 01:17:36.286582 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-01-07 01:17:36.286589 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-01-07 01:17:36.286600 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-01-07 01:17:36.286608 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-01-07 01:17:36.286615 | orchestrator |
2026-01-07 01:17:36.286622 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-01-07 01:17:36.286628 | orchestrator | Wednesday 07 January 2026 01:14:23 +0000 (0:00:17.617) 0:01:28.992 *****
2026-01-07 01:17:36.286635 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.286642 | orchestrator |
2026-01-07 01:17:36.286648 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-01-07 01:17:36.286655 | orchestrator | Wednesday 07 January 2026 01:14:27 +0000 (0:00:03.820) 0:01:32.812 *****
2026-01-07 01:17:36.286662 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.286668 | orchestrator |
2026-01-07 01:17:36.286675 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-01-07 01:17:36.286681 | orchestrator | Wednesday 07 January 2026 01:14:33 +0000 (0:00:05.696) 0:01:38.509 *****
2026-01-07 01:17:36.286698 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:36.286705 | orchestrator |
2026-01-07 01:17:36.286712 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-01-07 01:17:36.286719 | orchestrator | Wednesday 07 January 2026 01:14:33 +0000 (0:00:00.200) 0:01:38.709 *****
2026-01-07 01:17:36.286725 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.286732 | orchestrator |
2026-01-07 01:17:36.286739 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:17:36.286745 | orchestrator | Wednesday 07 January 2026 01:14:38 +0000 (0:00:04.780) 0:01:43.490 *****
2026-01-07 01:17:36.286770 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:36.286778 | orchestrator |
2026-01-07 01:17:36.286785 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-01-07 01:17:36.286791 | orchestrator | Wednesday 07 January 2026 01:14:39 +0000 (0:00:01.017) 0:01:44.508 *****
2026-01-07 01:17:36.286798 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:36.286805 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.286811 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:36.286818 | orchestrator |
2026-01-07 01:17:36.286824 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-01-07 01:17:36.286831 | orchestrator | Wednesday 07 January 2026 01:14:44 +0000 (0:00:04.868) 0:01:49.377 *****
2026-01-07 01:17:36.286837 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:36.286844 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.286851 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:36.286857 | orchestrator |
2026-01-07 01:17:36.286864 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-01-07 01:17:36.286871 | orchestrator | Wednesday 07 January 2026 01:14:47 +0000 (0:00:03.701) 0:01:53.078 *****
2026-01-07 01:17:36.286877 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.286884 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:36.286891 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:36.286897 | orchestrator |
2026-01-07 01:17:36.286904 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-01-07 01:17:36.286910 | orchestrator | Wednesday 07 January 2026 01:14:48 +0000 (0:00:00.656) 0:01:53.735 *****
2026-01-07 01:17:36.286920 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:36.286930 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.286940 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:36.286950 | orchestrator |
2026-01-07 01:17:36.286961 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-01-07 01:17:36.286972 | orchestrator | Wednesday 07 January 2026 01:14:50 +0000 (0:00:01.920) 0:01:55.655 *****
2026-01-07 01:17:36.286984 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:36.286994 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.287005 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:36.287016 | orchestrator |
2026-01-07 01:17:36.287027 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-01-07 01:17:36.287039 | orchestrator | Wednesday 07 January 2026 01:14:51 +0000 (0:00:01.221) 0:01:56.876 *****
2026-01-07 01:17:36.287050 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.287061 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:36.287073 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:36.287084 | orchestrator |
2026-01-07 01:17:36.287095 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-01-07 01:17:36.287107 | orchestrator | Wednesday 07 January 2026 01:14:52 +0000 (0:00:01.116) 0:01:57.993 *****
2026-01-07 01:17:36.287119 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:36.287129 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.287139 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:36.287146 | orchestrator |
2026-01-07 01:17:36.287160 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-01-07 01:17:36.287168 | orchestrator | Wednesday 07 January 2026 01:14:54 +0000 (0:00:02.053) 0:02:00.046 *****
2026-01-07 01:17:36.287180 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:36.287187 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:36.287194 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:36.287200 | orchestrator |
2026-01-07 01:17:36.287207 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-01-07 01:17:36.287214 | orchestrator | Wednesday 07 January 2026 01:14:56 +0000 (0:00:01.725) 0:02:01.772 *****
2026-01-07 01:17:36.287220 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.287227 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:36.287233 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:36.287240 | orchestrator |
2026-01-07 01:17:36.287247 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-01-07 01:17:36.287253 | orchestrator | Wednesday 07 January 2026 01:14:57 +0000 (0:00:00.658) 0:02:02.430 *****
2026-01-07 01:17:36.287260 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.287267 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:36.287273 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:36.287280 | orchestrator |
2026-01-07 01:17:36.287286 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:17:36.287293 | orchestrator | Wednesday 07 January 2026 01:15:00 +0000 (0:00:03.502) 0:02:05.933 *****
2026-01-07 01:17:36.287300 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:36.287306 | orchestrator |
2026-01-07 01:17:36.287313 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-01-07 01:17:36.287320 | orchestrator | Wednesday 07 January 2026 01:15:01 +0000 (0:00:00.696) 0:02:06.629 *****
2026-01-07 01:17:36.287326 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.287333 | orchestrator |
2026-01-07 01:17:36.287340 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-07 01:17:36.287346 | orchestrator | Wednesday 07 January 2026 01:15:05 +0000 (0:00:03.503) 0:02:10.133 *****
2026-01-07 01:17:36.287353 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.287359 | orchestrator |
2026-01-07 01:17:36.287367 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-01-07 01:17:36.287379 | orchestrator | Wednesday 07 January 2026 01:15:08 +0000 (0:00:03.069) 0:02:13.202 *****
2026-01-07 01:17:36.287386 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-07 01:17:36.287397 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-07 01:17:36.287404 | orchestrator |
2026-01-07 01:17:36.287411 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-01-07 01:17:36.287417 | orchestrator | Wednesday 07 January 2026 01:15:14 +0000 (0:00:06.722) 0:02:19.925 *****
2026-01-07 01:17:36.287424 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.287431 | orchestrator |
2026-01-07 01:17:36.287437 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-01-07 01:17:36.287444 | orchestrator | Wednesday 07 January 2026 01:15:18 +0000 (0:00:03.555) 0:02:23.481 *****
2026-01-07 01:17:36.287451 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:36.287457 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:36.287464 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:36.287470 | orchestrator |
2026-01-07 01:17:36.287477 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-01-07 01:17:36.287484 | orchestrator | Wednesday 07 January 2026 01:15:18 +0000 (0:00:00.315) 0:02:23.796 *****
2026-01-07 01:17:36.287493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-07 01:17:36.287511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-07 01:17:36.287519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-07 01:17:36.287527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-07 01:17:36.287537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-07 01:17:36.287544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-07 01:17:36.287552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-07 01:17:36.287565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-07 01:17:36.287576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-07 01:17:36.287584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-07 01:17:36.287591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-07 01:17:36.287601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-07 01:17:36.287608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:17:36.287619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:17:36.287626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:17:36.287633 | orchestrator |
2026-01-07 01:17:36.287640 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-01-07 01:17:36.287647 | orchestrator | Wednesday 07 January 2026 01:15:20 +0000 (0:00:02.148) 0:02:25.944 *****
2026-01-07 01:17:36.287654 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:36.287660 | orchestrator |
2026-01-07 01:17:36.287671 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-01-07 01:17:36.287678 | orchestrator | Wednesday 07 January 2026 01:15:20 +0000 (0:00:00.138) 0:02:26.082 *****
2026-01-07 01:17:36.287684 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:36.287691 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:36.287698 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:36.287704 | orchestrator |
2026-01-07 01:17:36.287711 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-01-07 01:17:36.287718 | orchestrator | Wednesday 07 January 2026 01:15:21 +0000 (0:00:00.504) 0:02:26.587 *****
2026-01-07 01:17:36.287725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value':
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:36.287738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:36.287765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.287785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.287796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:36.287808 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:36.287828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:36.287841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:36.287866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.287882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.287902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:36.287915 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:36.287927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:36.287944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:36.287952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.287959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.287969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:36.287981 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:36.287988 | orchestrator | 2026-01-07 01:17:36.287995 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-07 01:17:36.288002 | orchestrator | Wednesday 07 January 2026 01:15:22 +0000 (0:00:00.754) 0:02:27.342 ***** 2026-01-07 01:17:36.288009 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:17:36.288015 | orchestrator | 2026-01-07 01:17:36.288022 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-07 01:17:36.288029 | orchestrator | Wednesday 07 January 2026 01:15:22 +0000 (0:00:00.582) 0:02:27.924 ***** 2026-01-07 01:17:36.288035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.288485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.288508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.288528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.288536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.288543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.288550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.288558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.288571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.288578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.288592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.288599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.288606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.288613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.288624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.288631 | orchestrator | 2026-01-07 01:17:36.288638 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-07 01:17:36.288646 | orchestrator | Wednesday 07 January 2026 01:15:28 +0000 (0:00:05.320) 0:02:33.245 ***** 2026-01-07 01:17:36.288653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:36.288664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:36.288674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.288681 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.288688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:36.288695 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:36.288706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:36.288713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:36.288724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.288734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.288741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:36.288748 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:36.289109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:36.289126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:36.289145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.289167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.289185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:36.289196 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:36.289203 | orchestrator | 2026-01-07 01:17:36.289210 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-07 01:17:36.289217 | orchestrator | Wednesday 07 January 2026 01:15:29 +0000 (0:00:00.959) 0:02:34.204 ***** 2026-01-07 01:17:36.289224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:36.289232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:36.289239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.289251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.289262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:36.289275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:36.289282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:36.289289 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:36.289296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.289303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.289315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:36.289326 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:36.289333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:36.289341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:36.289348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.289354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:36.289360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:36.289366 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:36.289372 | orchestrator | 2026-01-07 01:17:36.289378 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-07 01:17:36.289387 | orchestrator | Wednesday 07 January 2026 01:15:30 +0000 (0:00:01.142) 0:02:35.347 ***** 2026-01-07 01:17:36.289397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.289403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.289412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.289418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.289425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.289434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.289443 | 
orchestrator | 2026-01-07 01:17:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:17:36.289450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': 
'30'}}}) 2026-01-07 01:17:36.289497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289515 | orchestrator | 2026-01-07 01:17:36.289521 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] 
******************************** 2026-01-07 01:17:36.289527 | orchestrator | Wednesday 07 January 2026 01:15:35 +0000 (0:00:05.653) 0:02:41.000 ***** 2026-01-07 01:17:36.289535 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-07 01:17:36.289541 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-07 01:17:36.289547 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-07 01:17:36.289553 | orchestrator | 2026-01-07 01:17:36.289559 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-07 01:17:36.289565 | orchestrator | Wednesday 07 January 2026 01:15:37 +0000 (0:00:02.044) 0:02:43.044 ***** 2026-01-07 01:17:36.289571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.289581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.289590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.289597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.289605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.289612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.289618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-01-07 01:17:36.289648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289670 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.289685 | orchestrator | 2026-01-07 01:17:36.289691 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-07 01:17:36.289697 | orchestrator | Wednesday 07 January 2026 01:15:55 +0000 (0:00:18.048) 0:03:01.092 ***** 2026-01-07 01:17:36.289703 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.289709 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:36.289714 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:36.289721 | orchestrator | 2026-01-07 01:17:36.289729 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-07 01:17:36.289739 | orchestrator | Wednesday 07 January 2026 01:15:58 +0000 (0:00:02.049) 0:03:03.142 ***** 2026-01-07 01:17:36.289765 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-01-07 01:17:36.289778 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-07 01:17:36.289787 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-07 01:17:36.289798 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-07 01:17:36.289808 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-07 01:17:36.289818 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-07 01:17:36.289827 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-07 01:17:36.289837 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-07 01:17:36.289848 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-07 01:17:36.289858 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-07 01:17:36.289868 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-07 01:17:36.289879 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-07 01:17:36.289886 | orchestrator | 2026-01-07 01:17:36.289893 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-07 01:17:36.289899 | orchestrator | Wednesday 07 January 2026 01:16:02 +0000 (0:00:04.850) 0:03:07.993 ***** 2026-01-07 01:17:36.289906 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-07 01:17:36.289913 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-07 01:17:36.289919 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-07 01:17:36.289926 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-07 01:17:36.289933 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-07 01:17:36.289939 | orchestrator | changed: [testbed-node-2] => 
(item=client_ca.cert.pem) 2026-01-07 01:17:36.289946 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-07 01:17:36.289953 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-07 01:17:36.289960 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-07 01:17:36.289967 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-07 01:17:36.289979 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-07 01:17:36.289990 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-07 01:17:36.289996 | orchestrator | 2026-01-07 01:17:36.290002 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-07 01:17:36.290008 | orchestrator | Wednesday 07 January 2026 01:16:07 +0000 (0:00:05.081) 0:03:13.074 ***** 2026-01-07 01:17:36.290036 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-07 01:17:36.290043 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-07 01:17:36.290049 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-07 01:17:36.290055 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-07 01:17:36.290061 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-07 01:17:36.290066 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-07 01:17:36.290072 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-07 01:17:36.290078 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-07 01:17:36.290083 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-07 01:17:36.290089 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-07 01:17:36.290095 | orchestrator | changed: [testbed-node-2] => 
(item=server_ca.key.pem) 2026-01-07 01:17:36.290100 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-07 01:17:36.290106 | orchestrator | 2026-01-07 01:17:36.290112 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-07 01:17:36.290118 | orchestrator | Wednesday 07 January 2026 01:16:12 +0000 (0:00:04.609) 0:03:17.683 ***** 2026-01-07 01:17:36.290124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.290136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.290142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:36.290156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.290162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.290168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:36.290174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.290184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.290191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.290200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.290210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.290216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:36.290222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.290228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.290238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:36.290244 | orchestrator | 2026-01-07 01:17:36.290250 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-07 01:17:36.290256 | orchestrator | Wednesday 07 January 2026 01:16:16 +0000 (0:00:03.675) 0:03:21.359 ***** 2026-01-07 01:17:36.290266 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:36.290272 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:36.290278 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:36.290284 | orchestrator | 2026-01-07 01:17:36.290289 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-07 01:17:36.290295 | orchestrator | Wednesday 07 January 2026 01:16:16 +0000 (0:00:00.640) 0:03:21.999 ***** 2026-01-07 01:17:36.290301 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290307 | orchestrator | 2026-01-07 01:17:36.290313 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-07 01:17:36.290318 | orchestrator | Wednesday 07 January 2026 01:16:19 +0000 (0:00:02.209) 0:03:24.209 ***** 2026-01-07 01:17:36.290324 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290330 | orchestrator | 2026-01-07 01:17:36.290335 | orchestrator | TASK 
[octavia : Creating Octavia database user and setting permissions] ******** 2026-01-07 01:17:36.290341 | orchestrator | Wednesday 07 January 2026 01:16:21 +0000 (0:00:02.591) 0:03:26.801 ***** 2026-01-07 01:17:36.290347 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290353 | orchestrator | 2026-01-07 01:17:36.290359 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-07 01:17:36.290364 | orchestrator | Wednesday 07 January 2026 01:16:24 +0000 (0:00:02.946) 0:03:29.747 ***** 2026-01-07 01:17:36.290370 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290376 | orchestrator | 2026-01-07 01:17:36.290382 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-01-07 01:17:36.290387 | orchestrator | Wednesday 07 January 2026 01:16:27 +0000 (0:00:02.679) 0:03:32.427 ***** 2026-01-07 01:17:36.290393 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290399 | orchestrator | 2026-01-07 01:17:36.290404 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-07 01:17:36.290413 | orchestrator | Wednesday 07 January 2026 01:16:47 +0000 (0:00:20.551) 0:03:52.979 ***** 2026-01-07 01:17:36.290419 | orchestrator | 2026-01-07 01:17:36.290425 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-07 01:17:36.290431 | orchestrator | Wednesday 07 January 2026 01:16:47 +0000 (0:00:00.067) 0:03:53.046 ***** 2026-01-07 01:17:36.290437 | orchestrator | 2026-01-07 01:17:36.290442 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-07 01:17:36.290448 | orchestrator | Wednesday 07 January 2026 01:16:47 +0000 (0:00:00.070) 0:03:53.117 ***** 2026-01-07 01:17:36.290454 | orchestrator | 2026-01-07 01:17:36.290460 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] 
********************** 2026-01-07 01:17:36.290466 | orchestrator | Wednesday 07 January 2026 01:16:48 +0000 (0:00:00.072) 0:03:53.190 ***** 2026-01-07 01:17:36.290471 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290477 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:36.290483 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:36.290489 | orchestrator | 2026-01-07 01:17:36.290495 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-07 01:17:36.290500 | orchestrator | Wednesday 07 January 2026 01:17:02 +0000 (0:00:14.806) 0:04:07.996 ***** 2026-01-07 01:17:36.290506 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290512 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:36.290518 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:36.290523 | orchestrator | 2026-01-07 01:17:36.290529 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-07 01:17:36.290535 | orchestrator | Wednesday 07 January 2026 01:17:08 +0000 (0:00:05.527) 0:04:13.524 ***** 2026-01-07 01:17:36.290541 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290546 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:36.290552 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:36.290558 | orchestrator | 2026-01-07 01:17:36.290564 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-07 01:17:36.290569 | orchestrator | Wednesday 07 January 2026 01:17:13 +0000 (0:00:05.316) 0:04:18.841 ***** 2026-01-07 01:17:36.290583 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290593 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:36.290602 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:36.290610 | orchestrator | 2026-01-07 01:17:36.290619 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] 
******************* 2026-01-07 01:17:36.290629 | orchestrator | Wednesday 07 January 2026 01:17:23 +0000 (0:00:09.927) 0:04:28.769 ***** 2026-01-07 01:17:36.290639 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:36.290649 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:36.290659 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:36.290670 | orchestrator | 2026-01-07 01:17:36.290679 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:17:36.290688 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:17:36.290695 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 01:17:36.290700 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 01:17:36.290706 | orchestrator | 2026-01-07 01:17:36.290712 | orchestrator | 2026-01-07 01:17:36.290718 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:17:36.290728 | orchestrator | Wednesday 07 January 2026 01:17:33 +0000 (0:00:09.899) 0:04:38.669 ***** 2026-01-07 01:17:36.290734 | orchestrator | =============================================================================== 2026-01-07 01:17:36.290739 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.55s 2026-01-07 01:17:36.290745 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.05s 2026-01-07 01:17:36.290766 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.62s 2026-01-07 01:17:36.290773 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.87s 2026-01-07 01:17:36.290779 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.81s 
2026-01-07 01:17:36.290784 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.93s
2026-01-07 01:17:36.290790 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 9.90s
2026-01-07 01:17:36.290795 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.71s
2026-01-07 01:17:36.290801 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.36s
2026-01-07 01:17:36.290807 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.76s
2026-01-07 01:17:36.290812 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.72s
2026-01-07 01:17:36.290818 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.95s
2026-01-07 01:17:36.290824 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.70s
2026-01-07 01:17:36.290829 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.65s
2026-01-07 01:17:36.290835 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.65s
2026-01-07 01:17:36.290840 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 5.53s
2026-01-07 01:17:36.290846 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.33s
2026-01-07 01:17:36.290852 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.32s
2026-01-07 01:17:36.290857 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.32s
2026-01-07 01:17:36.290866 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.08s
2026-01-07 01:17:39.331170 | orchestrator | 2026-01-07 01:17:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:17:42.368871 | orchestrator | 2026-01-07 01:17:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:17:45.411329 | orchestrator | 2026-01-07 01:17:45 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:17:48.453913 | orchestrator | 2026-01-07 01:17:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:17:51.496444 | orchestrator | 2026-01-07 01:17:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:17:54.538465 | orchestrator | 2026-01-07 01:17:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:17:57.577359 | orchestrator | 2026-01-07 01:17:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:00.623139 | orchestrator | 2026-01-07 01:18:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:03.663945 | orchestrator | 2026-01-07 01:18:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:06.710792 | orchestrator | 2026-01-07 01:18:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:09.755511 | orchestrator | 2026-01-07 01:18:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:12.799736 | orchestrator | 2026-01-07 01:18:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:15.846870 | orchestrator | 2026-01-07 01:18:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:18.891059 | orchestrator | 2026-01-07 01:18:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:21.934291 | orchestrator | 2026-01-07 01:18:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:24.979674 | orchestrator | 2026-01-07 01:18:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:28.029664 | orchestrator | 2026-01-07 01:18:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:31.075287 | orchestrator | 2026-01-07 01:18:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:34.115104 | orchestrator | 2026-01-07 01:18:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:18:37.161518 | orchestrator |
2026-01-07 01:18:37.498233 | orchestrator |
2026-01-07 01:18:37.506330 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Jan 7 01:18:37 UTC 2026
2026-01-07 01:18:37.506558 | orchestrator |
2026-01-07 01:18:37.878115 | orchestrator | ok: Runtime: 0:35:31.365247
2026-01-07 01:18:38.140212 |
2026-01-07 01:18:38.140428 | TASK [Bootstrap services]
2026-01-07 01:18:39.060546 | orchestrator |
2026-01-07 01:18:39.060718 | orchestrator | # BOOTSTRAP
2026-01-07 01:18:39.060734 | orchestrator |
2026-01-07 01:18:39.060761 | orchestrator | + set -e
2026-01-07 01:18:39.060777 | orchestrator | + echo
2026-01-07 01:18:39.060786 | orchestrator | + echo '# BOOTSTRAP'
2026-01-07 01:18:39.060797 | orchestrator | + echo
2026-01-07 01:18:39.060821 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-01-07 01:18:39.071596 | orchestrator | + set -e
2026-01-07 01:18:39.071684 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-01-07 01:18:44.091118 | orchestrator | 2026-01-07 01:18:44 | INFO  | It takes a moment until task 400bf878-518d-463b-85bf-81256c855a63 (flavor-manager) has been started and output is visible here.
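The octavia container definitions deployed above each carry a healthcheck such as `['CMD-SHELL', 'healthcheck_port octavia-worker 5672']` together with `interval`, `retries` and `timeout` settings. As a rough illustration of those retry semantics, here is a simplified Python sketch; note this is our own stand-in, not kolla's implementation (kolla's `healthcheck_port` script checks that the named process holds a socket on the port, while this only probes TCP reachability):

```python
import socket
import time


def healthcheck_port(host: str, port: int, retries: int = 3,
                     interval: float = 1.0, timeout: float = 3.0) -> bool:
    """Return True once a TCP connect to host:port succeeds.

    Simplified stand-in for kolla's healthcheck_port: that script verifies
    a specific *process* owns a socket on the port; this only checks that
    something accepts connections, retrying `retries` times with `interval`
    seconds between attempts.
    """
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            if attempt < retries - 1:
                time.sleep(interval)
    return False
```

With a service listening, `healthcheck_port("127.0.0.1", 9876)` returns `True` immediately; against a closed port it exhausts its retries and returns `False`, which is the condition Docker counts toward the `retries: 3` unhealthy threshold seen in the task output.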
2026-01-07 01:18:51.937567 | orchestrator | 2026-01-07 01:18:47 | INFO  | Flavor SCS-1L-1 created
2026-01-07 01:18:51.937627 | orchestrator | 2026-01-07 01:18:48 | INFO  | Flavor SCS-1L-1-5 created
2026-01-07 01:18:51.937634 | orchestrator | 2026-01-07 01:18:48 | INFO  | Flavor SCS-1V-2 created
2026-01-07 01:18:51.937639 | orchestrator | 2026-01-07 01:18:48 | INFO  | Flavor SCS-1V-2-5 created
2026-01-07 01:18:51.937643 | orchestrator | 2026-01-07 01:18:48 | INFO  | Flavor SCS-1V-4 created
2026-01-07 01:18:51.937647 | orchestrator | 2026-01-07 01:18:49 | INFO  | Flavor SCS-1V-4-10 created
2026-01-07 01:18:51.937651 | orchestrator | 2026-01-07 01:18:49 | INFO  | Flavor SCS-1V-8 created
2026-01-07 01:18:51.937655 | orchestrator | 2026-01-07 01:18:49 | INFO  | Flavor SCS-1V-8-20 created
2026-01-07 01:18:51.937666 | orchestrator | 2026-01-07 01:18:49 | INFO  | Flavor SCS-2V-4 created
2026-01-07 01:18:51.937670 | orchestrator | 2026-01-07 01:18:49 | INFO  | Flavor SCS-2V-4-10 created
2026-01-07 01:18:51.937674 | orchestrator | 2026-01-07 01:18:49 | INFO  | Flavor SCS-2V-8 created
2026-01-07 01:18:51.937677 | orchestrator | 2026-01-07 01:18:49 | INFO  | Flavor SCS-2V-8-20 created
2026-01-07 01:18:51.937681 | orchestrator | 2026-01-07 01:18:49 | INFO  | Flavor SCS-2V-16 created
2026-01-07 01:18:51.937685 | orchestrator | 2026-01-07 01:18:49 | INFO  | Flavor SCS-2V-16-50 created
2026-01-07 01:18:51.937689 | orchestrator | 2026-01-07 01:18:50 | INFO  | Flavor SCS-4V-8 created
2026-01-07 01:18:51.937692 | orchestrator | 2026-01-07 01:18:50 | INFO  | Flavor SCS-4V-8-20 created
2026-01-07 01:18:51.937696 | orchestrator | 2026-01-07 01:18:50 | INFO  | Flavor SCS-4V-16 created
2026-01-07 01:18:51.937702 | orchestrator | 2026-01-07 01:18:50 | INFO  | Flavor SCS-4V-16-50 created
2026-01-07 01:18:51.937709 | orchestrator | 2026-01-07 01:18:50 | INFO  | Flavor SCS-4V-32 created
2026-01-07 01:18:51.937715 | orchestrator | 2026-01-07 01:18:50 | INFO  | Flavor SCS-4V-32-100 created
2026-01-07 01:18:51.937721 | orchestrator | 2026-01-07 01:18:50 | INFO  | Flavor SCS-8V-16 created
2026-01-07 01:18:51.937727 | orchestrator | 2026-01-07 01:18:50 | INFO  | Flavor SCS-8V-16-50 created
2026-01-07 01:18:51.937734 | orchestrator | 2026-01-07 01:18:50 | INFO  | Flavor SCS-8V-32 created
2026-01-07 01:18:51.937740 | orchestrator | 2026-01-07 01:18:51 | INFO  | Flavor SCS-8V-32-100 created
2026-01-07 01:18:51.937746 | orchestrator | 2026-01-07 01:18:51 | INFO  | Flavor SCS-16V-32 created
2026-01-07 01:18:51.937752 | orchestrator | 2026-01-07 01:18:51 | INFO  | Flavor SCS-16V-32-100 created
2026-01-07 01:18:51.937758 | orchestrator | 2026-01-07 01:18:51 | INFO  | Flavor SCS-2V-4-20s created
2026-01-07 01:18:51.937763 | orchestrator | 2026-01-07 01:18:51 | INFO  | Flavor SCS-4V-8-50s created
2026-01-07 01:18:51.937769 | orchestrator | 2026-01-07 01:18:51 | INFO  | Flavor SCS-8V-32-100s created
2026-01-07 01:18:54.301387 | orchestrator | 2026-01-07 01:18:54 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-01-07 01:19:04.518773 | orchestrator | 2026-01-07 01:19:04 | INFO  | Task 7124ade4-bc51-4041-b503-e6ef038eabb5 (bootstrap-basic) was prepared for execution.
2026-01-07 01:19:04.519019 | orchestrator | 2026-01-07 01:19:04 | INFO  | It takes a moment until task 7124ade4-bc51-4041-b503-e6ef038eabb5 (bootstrap-basic) has been started and output is visible here.
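The flavors created by flavor-manager follow the SCS standard naming scheme, roughly `SCS-<#vCPUs><class>-<RAM GiB>[-<disk GB>[s]]`: for example `SCS-4V-16-50` is 4 vCPUs, 16 GiB RAM and a 50 GB root disk, names without a disk part (like `SCS-8V-16`) are diskless, and a trailing `s` marks local SSD storage. A small parser for the names appearing in this log (our sketch of the convention, not part of the testbed tooling; per our reading of the SCS spec, `V` is the standard vCPU class and `L` the low-performance class):

```python
import re
from typing import NamedTuple, Optional


class ScsFlavor(NamedTuple):
    vcpus: int
    cpu_class: str           # 'V' = standard vCPU, 'L' = low-performance (assumed from SCS naming)
    ram_gib: int
    disk_gb: Optional[int]   # None for diskless flavors such as SCS-8V-16
    ssd: bool                # trailing 's' marks local SSD


# Covers only the name shapes seen in this log (V and L classes).
_NAME = re.compile(r"^SCS-(\d+)([VL])-(\d+)(?:-(\d+)(s?))?$")


def parse_scs_flavor(name: str) -> ScsFlavor:
    """Decode an SCS flavor name like 'SCS-2V-4-20s' into its components."""
    m = _NAME.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    cpus, cpu_class, ram, disk, ssd = m.groups()
    return ScsFlavor(int(cpus), cpu_class, int(ram),
                     int(disk) if disk else None, ssd == "s")
```

For instance, `parse_scs_flavor("SCS-2V-4-20s")` yields 2 vCPUs, 4 GiB RAM and a 20 GB local-SSD disk, matching one of the flavors created above.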
2026-01-07 01:19:49.915095 | orchestrator |
2026-01-07 01:19:49.915175 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-01-07 01:19:49.915187 | orchestrator |
2026-01-07 01:19:49.915194 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 01:19:49.915199 | orchestrator | Wednesday 07 January 2026 01:19:08 +0000 (0:00:00.067) 0:00:00.067 *****
2026-01-07 01:19:49.915203 | orchestrator | ok: [localhost]
2026-01-07 01:19:49.915207 | orchestrator |
2026-01-07 01:19:49.915211 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-01-07 01:19:49.915215 | orchestrator | Wednesday 07 January 2026 01:19:10 +0000 (0:00:01.856) 0:00:01.924 *****
2026-01-07 01:19:49.915219 | orchestrator | ok: [localhost]
2026-01-07 01:19:49.915223 | orchestrator |
2026-01-07 01:19:49.915227 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-01-07 01:19:49.915231 | orchestrator | Wednesday 07 January 2026 01:19:18 +0000 (0:00:07.836) 0:00:09.760 *****
2026-01-07 01:19:49.915235 | orchestrator | changed: [localhost]
2026-01-07 01:19:49.915239 | orchestrator |
2026-01-07 01:19:49.915242 | orchestrator | TASK [Create public network] ***************************************************
2026-01-07 01:19:49.915246 | orchestrator | Wednesday 07 January 2026 01:19:25 +0000 (0:00:07.397) 0:00:17.158 *****
2026-01-07 01:19:49.915250 | orchestrator | changed: [localhost]
2026-01-07 01:19:49.915254 | orchestrator |
2026-01-07 01:19:49.915258 | orchestrator | TASK [Set public network to default] *******************************************
2026-01-07 01:19:49.915261 | orchestrator | Wednesday 07 January 2026 01:19:31 +0000 (0:00:05.521) 0:00:22.680 *****
2026-01-07 01:19:49.915268 | orchestrator | changed: [localhost]
2026-01-07 01:19:49.915272 | orchestrator |
2026-01-07 01:19:49.915275 | orchestrator | TASK [Create public subnet] ****************************************************
2026-01-07 01:19:49.915279 | orchestrator | Wednesday 07 January 2026 01:19:37 +0000 (0:00:06.550) 0:00:29.230 *****
2026-01-07 01:19:49.915283 | orchestrator | changed: [localhost]
2026-01-07 01:19:49.915287 | orchestrator |
2026-01-07 01:19:49.915290 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-01-07 01:19:49.915294 | orchestrator | Wednesday 07 January 2026 01:19:42 +0000 (0:00:04.342) 0:00:33.573 *****
2026-01-07 01:19:49.915298 | orchestrator | changed: [localhost]
2026-01-07 01:19:49.915302 | orchestrator |
2026-01-07 01:19:49.915305 | orchestrator | TASK [Create manager role] *****************************************************
2026-01-07 01:19:49.915313 | orchestrator | Wednesday 07 January 2026 01:19:46 +0000 (0:00:03.868) 0:00:37.442 *****
2026-01-07 01:19:49.915317 | orchestrator | ok: [localhost]
2026-01-07 01:19:49.915321 | orchestrator |
2026-01-07 01:19:49.915325 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:19:49.915328 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:19:49.915333 | orchestrator |
2026-01-07 01:19:49.915336 | orchestrator |
2026-01-07 01:19:49.915340 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:19:49.915344 | orchestrator | Wednesday 07 January 2026 01:19:49 +0000 (0:00:03.502) 0:00:40.944 *****
2026-01-07 01:19:49.915348 | orchestrator | ===============================================================================
2026-01-07 01:19:49.915351 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.84s
2026-01-07 01:19:49.915355 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.40s
2026-01-07 01:19:49.915359 | orchestrator | Set public network to default ------------------------------------------- 6.55s
2026-01-07 01:19:49.915363 | orchestrator | Create public network --------------------------------------------------- 5.52s
2026-01-07 01:19:49.915377 | orchestrator | Create public subnet ---------------------------------------------------- 4.34s
2026-01-07 01:19:49.915381 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.87s
2026-01-07 01:19:49.915385 | orchestrator | Create manager role ----------------------------------------------------- 3.50s
2026-01-07 01:19:49.915389 | orchestrator | Gathering Facts --------------------------------------------------------- 1.86s
2026-01-07 01:19:52.427843 | orchestrator | 2026-01-07 01:19:52 | INFO  | It takes a moment until task c64617bf-4347-481f-ac40-41be276e2e87 (image-manager) has been started and output is visible here.
2026-01-07 01:20:35.146338 | orchestrator | 2026-01-07 01:19:55 | INFO  | Processing image 'Cirros 0.6.2'
2026-01-07 01:20:35.146480 | orchestrator | 2026-01-07 01:19:55 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-01-07 01:20:35.146508 | orchestrator | 2026-01-07 01:19:55 | INFO  | Importing image Cirros 0.6.2
2026-01-07 01:20:35.146525 | orchestrator | 2026-01-07 01:19:55 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-07 01:20:35.146542 | orchestrator | 2026-01-07 01:19:57 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:20:35.146559 | orchestrator | 2026-01-07 01:19:59 | INFO  | Waiting for import to complete...
2026-01-07 01:20:35.146576 | orchestrator | 2026-01-07 01:20:10 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-01-07 01:20:35.146593 | orchestrator | 2026-01-07 01:20:10 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-01-07 01:20:35.146610 | orchestrator | 2026-01-07 01:20:10 | INFO  | Setting internal_version = 0.6.2
2026-01-07 01:20:35.146626 | orchestrator | 2026-01-07 01:20:10 | INFO  | Setting image_original_user = cirros
2026-01-07 01:20:35.146643 | orchestrator | 2026-01-07 01:20:10 | INFO  | Adding tag os:cirros
2026-01-07 01:20:35.146660 | orchestrator | 2026-01-07 01:20:10 | INFO  | Setting property architecture: x86_64
2026-01-07 01:20:35.146677 | orchestrator | 2026-01-07 01:20:11 | INFO  | Setting property hw_disk_bus: scsi
2026-01-07 01:20:35.146694 | orchestrator | 2026-01-07 01:20:11 | INFO  | Setting property hw_rng_model: virtio
2026-01-07 01:20:35.146711 | orchestrator | 2026-01-07 01:20:11 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-07 01:20:35.146727 | orchestrator | 2026-01-07 01:20:11 | INFO  | Setting property hw_watchdog_action: reset
2026-01-07 01:20:35.146745 | orchestrator | 2026-01-07 01:20:12 | INFO  | Setting property hypervisor_type: qemu
2026-01-07 01:20:35.146761 | orchestrator | 2026-01-07 01:20:12 | INFO  | Setting property os_distro: cirros
2026-01-07 01:20:35.146777 | orchestrator | 2026-01-07 01:20:12 | INFO  | Setting property os_purpose: minimal
2026-01-07 01:20:35.146794 | orchestrator | 2026-01-07 01:20:12 | INFO  | Setting property replace_frequency: never
2026-01-07 01:20:35.146810 | orchestrator | 2026-01-07 01:20:12 | INFO  | Setting property uuid_validity: none
2026-01-07 01:20:35.146826 | orchestrator | 2026-01-07 01:20:13 | INFO  | Setting property provided_until: none
2026-01-07 01:20:35.146842 | orchestrator | 2026-01-07 01:20:13 | INFO  | Setting property image_description: Cirros
2026-01-07 01:20:35.146858 | orchestrator | 2026-01-07 01:20:13 | INFO  | Setting property image_name: Cirros
2026-01-07 01:20:35.146873 | orchestrator | 2026-01-07 01:20:13 | INFO  | Setting property internal_version: 0.6.2
2026-01-07 01:20:35.146890 | orchestrator | 2026-01-07 01:20:14 | INFO  | Setting property image_original_user: cirros
2026-01-07 01:20:35.146935 | orchestrator | 2026-01-07 01:20:14 | INFO  | Setting property os_version: 0.6.2
2026-01-07 01:20:35.146994 | orchestrator | 2026-01-07 01:20:14 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-07 01:20:35.147014 | orchestrator | 2026-01-07 01:20:14 | INFO  | Setting property image_build_date: 2023-05-30
2026-01-07 01:20:35.147028 | orchestrator | 2026-01-07 01:20:14 | INFO  | Checking status of 'Cirros 0.6.2'
2026-01-07 01:20:35.147042 | orchestrator | 2026-01-07 01:20:14 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-01-07 01:20:35.147057 | orchestrator | 2026-01-07 01:20:14 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-01-07 01:20:35.147070 | orchestrator | 2026-01-07 01:20:15 | INFO  | Processing image 'Cirros 0.6.3'
2026-01-07 01:20:35.147091 | orchestrator | 2026-01-07 01:20:15 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-01-07 01:20:35.147106 | orchestrator | 2026-01-07 01:20:15 | INFO  | Importing image Cirros 0.6.3
2026-01-07 01:20:35.147120 | orchestrator | 2026-01-07 01:20:15 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-07 01:20:35.147134 | orchestrator | 2026-01-07 01:20:17 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:20:35.147149 | orchestrator | 2026-01-07 01:20:19 | INFO  | Waiting for import to complete...
2026-01-07 01:20:35.147191 | orchestrator | 2026-01-07 01:20:29 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-01-07 01:20:35.147205 | orchestrator | 2026-01-07 01:20:29 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-01-07 01:20:35.147219 | orchestrator | 2026-01-07 01:20:29 | INFO  | Setting internal_version = 0.6.3
2026-01-07 01:20:35.147232 | orchestrator | 2026-01-07 01:20:29 | INFO  | Setting image_original_user = cirros
2026-01-07 01:20:35.147246 | orchestrator | 2026-01-07 01:20:29 | INFO  | Adding tag os:cirros
2026-01-07 01:20:35.147279 | orchestrator | 2026-01-07 01:20:30 | INFO  | Setting property architecture: x86_64
2026-01-07 01:20:35.147294 | orchestrator | 2026-01-07 01:20:30 | INFO  | Setting property hw_disk_bus: scsi
2026-01-07 01:20:35.147308 | orchestrator | 2026-01-07 01:20:30 | INFO  | Setting property hw_rng_model: virtio
2026-01-07 01:20:35.147322 | orchestrator | 2026-01-07 01:20:31 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-07 01:20:35.147336 | orchestrator | 2026-01-07 01:20:31 | INFO  | Setting property hw_watchdog_action: reset
2026-01-07 01:20:35.147355 | orchestrator | 2026-01-07 01:20:31 | INFO  | Setting property hypervisor_type: qemu
2026-01-07 01:20:35.147370 | orchestrator | 2026-01-07 01:20:31 | INFO  | Setting property os_distro: cirros
2026-01-07 01:20:35.147386 | orchestrator | 2026-01-07 01:20:31 | INFO  | Setting property os_purpose: minimal
2026-01-07 01:20:35.147401 | orchestrator | 2026-01-07 01:20:32 | INFO  | Setting property replace_frequency: never
2026-01-07 01:20:35.147416 | orchestrator | 2026-01-07 01:20:32 | INFO  | Setting property uuid_validity: none
2026-01-07 01:20:35.147432 | orchestrator | 2026-01-07 01:20:32 | INFO  | Setting property provided_until: none
2026-01-07 01:20:35.147448 | orchestrator | 2026-01-07 01:20:32 | INFO  | Setting property image_description: Cirros
2026-01-07 01:20:35.147463 | orchestrator | 2026-01-07 01:20:33 | INFO  | Setting property image_name: Cirros
2026-01-07 01:20:35.147478 | orchestrator | 2026-01-07 01:20:33 | INFO  | Setting property internal_version: 0.6.3
2026-01-07 01:20:35.147509 | orchestrator | 2026-01-07 01:20:33 | INFO  | Setting property image_original_user: cirros
2026-01-07 01:20:35.147524 | orchestrator | 2026-01-07 01:20:33 | INFO  | Setting property os_version: 0.6.3
2026-01-07 01:20:35.147540 | orchestrator | 2026-01-07 01:20:33 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-07 01:20:35.147555 | orchestrator | 2026-01-07 01:20:34 | INFO  | Setting property image_build_date: 2024-09-26
2026-01-07 01:20:35.147575 | orchestrator | 2026-01-07 01:20:34 | INFO  | Checking status of 'Cirros 0.6.3'
2026-01-07 01:20:35.147599 | orchestrator | 2026-01-07 01:20:34 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-01-07 01:20:35.147624 | orchestrator | 2026-01-07 01:20:34 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-01-07 01:20:35.481912 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-01-07 01:20:37.909839 | orchestrator | 2026-01-07 01:20:37 | INFO  | date: 2026-01-06
2026-01-07 01:20:37.909908 | orchestrator | 2026-01-07 01:20:37 | INFO  | image: octavia-amphora-haproxy-2024.2.20260106.qcow2
2026-01-07 01:20:37.909982 | orchestrator | 2026-01-07 01:20:37 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260106.qcow2
2026-01-07 01:20:37.910130 | orchestrator | 2026-01-07 01:20:37 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260106.qcow2.CHECKSUM
2026-01-07 01:20:38.249259 | orchestrator | 2026-01-07 01:20:38 | INFO  | checksum: ccaeac20334f3bd9ba5bef5fa32ee255e2acf964566127f89d3d6aa5eef5b38f
2026-01-07 01:20:38.337163 | orchestrator | 2026-01-07 01:20:38 | INFO  | It takes a moment until task f7e03bb6-893c-4e3d-8470-beee135de057 (image-manager) has been started and output is visible here.
2026-01-07 01:23:04.877853 | orchestrator | 2026-01-07 01:20:40 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-06'
2026-01-07 01:23:04.877951 | orchestrator | 2026-01-07 01:20:40 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260106.qcow2: 200
2026-01-07 01:23:04.877962 | orchestrator | 2026-01-07 01:20:40 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-06
2026-01-07 01:23:04.877970 | orchestrator | 2026-01-07 01:20:40 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260106.qcow2
2026-01-07 01:23:04.877977 | orchestrator | 2026-01-07 01:20:42 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.877983 | orchestrator | 2026-01-07 01:20:44 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.877991 | orchestrator | 2026-01-07 01:20:54 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.877997 | orchestrator | 2026-01-07 01:21:04 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878003 | orchestrator | 2026-01-07 01:21:14 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878011 | orchestrator | 2026-01-07 01:21:24 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878063 | orchestrator | 2026-01-07 01:21:34 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878071 | orchestrator | 2026-01-07 01:21:45 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.878077 | orchestrator | 2026-01-07 01:21:47 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.878117 | orchestrator | 2026-01-07 01:21:49 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.878146 | orchestrator | 2026-01-07 01:21:51 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.878153 | orchestrator | 2026-01-07 01:21:53 | ERROR  | Image OpenStack Octavia Amphora 2026-01-06 seems stuck in queued state
2026-01-07 01:23:04.878161 | orchestrator | 2026-01-07 01:21:53 | WARNING  | Deleting stuck image OpenStack Octavia Amphora 2026-01-06 and retrying import
2026-01-07 01:23:04.878168 | orchestrator | 2026-01-07 01:21:53 | INFO  | Retry attempt 1/1 for image OpenStack Octavia Amphora 2026-01-06
2026-01-07 01:23:04.878174 | orchestrator | 2026-01-07 01:21:53 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.878180 | orchestrator | 2026-01-07 01:21:55 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878186 | orchestrator | 2026-01-07 01:22:05 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878193 | orchestrator | 2026-01-07 01:22:15 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878199 | orchestrator | 2026-01-07 01:22:26 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878205 | orchestrator | 2026-01-07 01:22:36 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878211 | orchestrator | 2026-01-07 01:22:46 | INFO  | Waiting for import to complete...
2026-01-07 01:23:04.878217 | orchestrator | 2026-01-07 01:22:56 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.878223 | orchestrator | 2026-01-07 01:22:58 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.878229 | orchestrator | 2026-01-07 01:23:00 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.878248 | orchestrator | 2026-01-07 01:23:02 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:23:04.878255 | orchestrator | 2026-01-07 01:23:04 | ERROR  | Image OpenStack Octavia Amphora 2026-01-06 seems stuck in queued state
2026-01-07 01:23:04.878261 | orchestrator | 2026-01-07 01:23:04 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-01-07 01:23:04.878267 | orchestrator | 2026-01-07 01:23:04 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-01-07 01:23:04.878273 | orchestrator | 2026-01-07 01:23:04 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-01-07 01:23:04.878279 | orchestrator | 2026-01-07 01:23:04 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-01-07 01:23:04.878285 | orchestrator |
2026-01-07 01:23:04.878301 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
2026-01-07 01:23:05.391128 | orchestrator | ERROR
2026-01-07 01:23:05.391546 | orchestrator | {
2026-01-07 01:23:05.391675 | orchestrator | "delta": "0:04:26.583424",
2026-01-07 01:23:05.391747 | orchestrator | "end": "2026-01-07 01:23:05.230433",
2026-01-07 01:23:05.391800 | orchestrator | "msg": "non-zero return code",
2026-01-07 01:23:05.391848 | orchestrator | "rc": 1,
2026-01-07 01:23:05.391895 | orchestrator | "start": "2026-01-07 01:18:38.647009"
2026-01-07 01:23:05.391941 | orchestrator | } failure
2026-01-07 01:23:05.412982 |
2026-01-07 01:23:05.413326 | PLAY RECAP
2026-01-07 01:23:05.413523 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-01-07 01:23:05.413614 |
2026-01-07 01:23:05.651234 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-07 01:23:05.653907 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-07 01:23:06.462379 |
2026-01-07 01:23:06.462574 | PLAY [Post output play]
2026-01-07 01:23:06.480010 |
2026-01-07 01:23:06.480167 | LOOP [stage-output : Register sources]
2026-01-07 01:23:06.552439 |
2026-01-07 01:23:06.552776 | TASK [stage-output : Check sudo]
2026-01-07 01:23:07.478793 | orchestrator | sudo: a password is required
2026-01-07 01:23:07.593047 | orchestrator | ok: Runtime: 0:00:00.019633
2026-01-07 01:23:07.611212 |
2026-01-07 01:23:07.611546 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-07 01:23:07.654573 |
2026-01-07 01:23:07.654909 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-07 01:23:07.724370 | orchestrator | ok
2026-01-07 01:23:07.737413 |
2026-01-07 01:23:07.737579 | LOOP [stage-output : Ensure target folders exist]
2026-01-07 01:23:08.226505 | orchestrator | ok: "docs"
2026-01-07 01:23:08.226861 |
2026-01-07 01:23:08.487242 | orchestrator | ok: "artifacts"
2026-01-07 01:23:08.752669 | orchestrator | ok: "logs"
2026-01-07 01:23:08.774211 |
2026-01-07 01:23:08.774389 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-07 01:23:08.825232 |
2026-01-07 01:23:08.825697 | TASK [stage-output : Make all log files readable]
2026-01-07 01:23:09.142106 | orchestrator | ok
2026-01-07 01:23:09.148957 |
2026-01-07 01:23:09.149111 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-07 01:23:09.183553 | orchestrator | skipping: Conditional result was False
2026-01-07 01:23:09.194864 |
2026-01-07 01:23:09.195036 | TASK [stage-output : Discover log files for compression]
2026-01-07 01:23:09.220036 | orchestrator | skipping: Conditional result was False
2026-01-07 01:23:09.231578 |
2026-01-07 01:23:09.231760 | LOOP [stage-output : Archive everything from logs]
2026-01-07 01:23:09.272357 |
2026-01-07 01:23:09.272593 | PLAY [Post cleanup play]
2026-01-07 01:23:09.281403 |
2026-01-07 01:23:09.281581 | TASK [Set cloud fact (Zuul deployment)]
2026-01-07 01:23:09.350072 | orchestrator | ok
2026-01-07 01:23:09.361841 |
2026-01-07 01:23:09.361984 | TASK [Set cloud fact (local deployment)]
2026-01-07 01:23:09.396468 | orchestrator | skipping: Conditional result was False
2026-01-07 01:23:09.412829 |
2026-01-07 01:23:09.413012 | TASK [Clean the cloud environment]
2026-01-07 01:23:10.071441 | orchestrator | 2026-01-07 01:23:10 - clean up servers
2026-01-07 01:23:10.857829 | orchestrator | 2026-01-07 01:23:10 - testbed-manager
2026-01-07 01:23:10.946145 | orchestrator | 2026-01-07 01:23:10 - testbed-node-2
2026-01-07 01:23:11.045206 | orchestrator | 2026-01-07 01:23:11 - testbed-node-3
2026-01-07 01:23:11.132789 | orchestrator | 2026-01-07 01:23:11 - testbed-node-0
2026-01-07 01:23:11.226677 | orchestrator | 2026-01-07 01:23:11 - testbed-node-4
2026-01-07 01:23:11.327923 | orchestrator | 2026-01-07 01:23:11 - testbed-node-1
2026-01-07 01:23:11.414782 | orchestrator | 2026-01-07 01:23:11 - testbed-node-5
2026-01-07 01:23:11.519543 | orchestrator | 2026-01-07 01:23:11 - clean up keypairs
2026-01-07 01:23:11.543827 | orchestrator | 2026-01-07 01:23:11 - testbed
2026-01-07 01:23:11.573063 | orchestrator | 2026-01-07 01:23:11 - wait for servers to be gone
2026-01-07 01:23:22.525868 | orchestrator | 2026-01-07 01:23:22 - clean up ports
2026-01-07 01:23:23.150237 | orchestrator | 2026-01-07 01:23:23 - 2035d6d1-b9c7-47fe-86fa-bb0a81f9de86
2026-01-07 01:23:23.385492 | orchestrator | 2026-01-07 01:23:23 - 33a01b7a-ddbe-4605-b56b-31cf1018caca
2026-01-07 01:23:23.643750 | orchestrator | 2026-01-07 01:23:23 - 4ee82844-0e99-497c-bb55-1ccf7c9513ed
2026-01-07 01:23:23.884483 | orchestrator | 2026-01-07 01:23:23 - 7d12bb3a-b391-44b6-acc3-b664d8173559
2026-01-07 01:23:24.098473 | orchestrator | 2026-01-07 01:23:24 - b51a637f-6413-466b-ae0e-415620da5781
2026-01-07 01:23:24.292835 | orchestrator | 2026-01-07 01:23:24 - cfd14098-d8d7-4970-afb8-6e15cb8c62d7
2026-01-07 01:23:24.684486 | orchestrator | 2026-01-07 01:23:24 - ef7803ef-13d3-4463-86e4-71bbf2d2edf0
2026-01-07 01:23:24.882417 | orchestrator | 2026-01-07 01:23:24 - clean up volumes
2026-01-07 01:23:25.008667 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-1-node-base
2026-01-07 01:23:25.058288 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-5-node-base
2026-01-07 01:23:25.098560 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-3-node-base
2026-01-07 01:23:25.139920 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-2-node-base
2026-01-07 01:23:25.180688 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-manager-base
2026-01-07 01:23:25.220841 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-4-node-base
2026-01-07 01:23:25.264409 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-1-node-4
2026-01-07 01:23:25.310949 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-4-node-4
2026-01-07 01:23:25.352066 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-0-node-3
2026-01-07 01:23:25.400306 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-0-node-base
2026-01-07 01:23:25.442893 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-8-node-5
2026-01-07 01:23:25.487306 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-5-node-5
2026-01-07 01:23:25.531480 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-3-node-3
2026-01-07 01:23:25.578699 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-6-node-3
2026-01-07 01:23:25.617547 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-2-node-5
2026-01-07 01:23:25.660563 | orchestrator | 2026-01-07 01:23:25 - testbed-volume-7-node-4
2026-01-07 01:23:25.703268 | orchestrator | 2026-01-07 01:23:25 - disconnect routers
2026-01-07 01:23:25.849246 | orchestrator | 2026-01-07 01:23:25 - testbed
2026-01-07 01:23:26.825078 | orchestrator | 2026-01-07 01:23:26 - clean up subnets
2026-01-07 01:23:26.879472 | orchestrator | 2026-01-07 01:23:26 - subnet-testbed-management
2026-01-07 01:23:27.035042 | orchestrator | 2026-01-07 01:23:27 - clean up networks
2026-01-07 01:23:27.177662 | orchestrator | 2026-01-07 01:23:27 - net-testbed-management
2026-01-07 01:23:27.471676 | orchestrator | 2026-01-07 01:23:27 - clean up security groups
2026-01-07 01:23:27.519586 | orchestrator | 2026-01-07 01:23:27 - testbed-management
2026-01-07 01:23:27.801587 | orchestrator | 2026-01-07 01:23:27 - testbed-node
2026-01-07 01:23:27.801643 | orchestrator | 2026-01-07 01:23:27 - clean up floating ips
2026-01-07 01:23:27.801654 | orchestrator | 2026-01-07 01:23:27 - 81.163.193.57
2026-01-07 01:23:28.171958 | orchestrator | 2026-01-07 01:23:28 - clean up routers
2026-01-07 01:23:28.323645 | orchestrator | 2026-01-07 01:23:28 - testbed
2026-01-07 01:23:29.468199 | orchestrator | ok: Runtime: 0:00:19.424045
2026-01-07 01:23:29.470263 |
2026-01-07 01:23:29.470356 | PLAY RECAP
2026-01-07 01:23:29.470415 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-07 01:23:29.470459 |
2026-01-07 01:23:29.630227 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-07 01:23:29.632252 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-07 01:23:30.414102 |
2026-01-07 01:23:30.414292 | PLAY [Cleanup play]
2026-01-07 01:23:30.432359 |
2026-01-07 01:23:30.432546 | TASK [Set cloud fact (Zuul deployment)]
2026-01-07 01:23:30.491469 | orchestrator | ok
2026-01-07 01:23:30.500861 |
2026-01-07 01:23:30.501043 | TASK [Set cloud fact (local deployment)]
2026-01-07 01:23:30.537772 | orchestrator | skipping: Conditional result was False
2026-01-07 01:23:30.555553 |
2026-01-07 01:23:30.555750 | TASK [Clean the cloud environment]
2026-01-07 01:23:31.889690 | orchestrator | 2026-01-07 01:23:31 - clean up servers
2026-01-07 01:23:32.360689 | orchestrator | 2026-01-07 01:23:32 - clean up keypairs
2026-01-07 01:23:32.377461 | orchestrator | 2026-01-07 01:23:32 - wait for servers to be gone
2026-01-07 01:23:32.421974 | orchestrator | 2026-01-07 01:23:32 - clean up ports
2026-01-07 01:23:32.489988 | orchestrator | 2026-01-07 01:23:32 - clean up volumes
2026-01-07 01:23:32.553505 | orchestrator | 2026-01-07 01:23:32 - disconnect routers
2026-01-07 01:23:32.583749 | orchestrator | 2026-01-07 01:23:32 - clean up subnets
2026-01-07 01:23:32.602843 | orchestrator | 2026-01-07 01:23:32 - clean up networks
2026-01-07 01:23:32.730485 | orchestrator | 2026-01-07 01:23:32 - clean up security groups
2026-01-07 01:23:32.768048 | orchestrator | 2026-01-07 01:23:32 - clean up floating ips
2026-01-07 01:23:32.796651 | orchestrator | 2026-01-07 01:23:32 - clean up routers
2026-01-07 01:23:33.096125 | orchestrator | ok: Runtime: 0:00:01.321788
2026-01-07 01:23:33.100452 |
2026-01-07 01:23:33.100621 | PLAY RECAP
2026-01-07 01:23:33.100762 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-07 01:23:33.100841 |
2026-01-07 01:23:33.236716 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-07 01:23:33.237818 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-07 01:23:34.024938 |
2026-01-07 01:23:34.025130 | PLAY [Base post-fetch]
2026-01-07 01:23:34.041795 |
2026-01-07 01:23:34.041974 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-07 01:23:34.107999 | orchestrator | skipping: Conditional result was False
2026-01-07 01:23:34.119336 |
2026-01-07 01:23:34.119550 | TASK [fetch-output : Set log path for single node]
2026-01-07 01:23:34.177370 | orchestrator | ok
2026-01-07 01:23:34.187186 |
2026-01-07 01:23:34.187370 | LOOP [fetch-output : Ensure local output dirs]
2026-01-07 01:23:34.696469 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/c61f2e5dbb054357b308a9fc4c27d52b/work/logs"
2026-01-07 01:23:34.999908 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c61f2e5dbb054357b308a9fc4c27d52b/work/artifacts"
2026-01-07 01:23:35.293282 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c61f2e5dbb054357b308a9fc4c27d52b/work/docs"
2026-01-07 01:23:35.318282 |
2026-01-07 01:23:35.318585 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-07 01:23:36.293854 | orchestrator | changed: .d..t...... ./
2026-01-07 01:23:36.294231 | orchestrator | changed: All items complete
2026-01-07 01:23:36.294296 |
2026-01-07 01:23:37.042017 | orchestrator | changed: .d..t...... ./
2026-01-07 01:23:37.888123 | orchestrator | changed: .d..t...... ./
2026-01-07 01:23:37.916249 |
2026-01-07 01:23:37.916416 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-07 01:23:37.948149 | orchestrator | skipping: Conditional result was False
2026-01-07 01:23:37.951407 | orchestrator | skipping: Conditional result was False
2026-01-07 01:23:37.980578 |
2026-01-07 01:23:37.981545 | PLAY RECAP
2026-01-07 01:23:37.981674 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-07 01:23:37.981720 |
2026-01-07 01:23:38.140149 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-07 01:23:38.143398 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-07 01:23:38.951240 |
2026-01-07 01:23:38.951417 | PLAY [Base post]
2026-01-07 01:23:38.966739 |
2026-01-07 01:23:38.966939 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-07 01:23:39.980582 | orchestrator | changed
2026-01-07 01:23:39.990742 |
2026-01-07 01:23:39.990918 | PLAY RECAP
2026-01-07 01:23:39.990995 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-07 01:23:39.991067 |
2026-01-07 01:23:40.127706 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-07 01:23:40.128800 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-07 01:23:41.013932 |
2026-01-07 01:23:41.014116 | PLAY [Base post-logs]
2026-01-07 01:23:41.025787 |
2026-01-07 01:23:41.025958 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-07 01:23:41.530519 | localhost | changed
2026-01-07 01:23:41.541048 |
2026-01-07 01:23:41.541225 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-07 01:23:41.582755 | localhost | ok
2026-01-07 01:23:41.590003 |
2026-01-07 01:23:41.590195 | TASK [Set zuul-log-path fact]
2026-01-07 01:23:41.621301 | localhost | ok
2026-01-07 01:23:41.639725 |
2026-01-07 01:23:41.641854 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-07 01:23:41.684825 | localhost | ok
2026-01-07 01:23:41.693537 |
2026-01-07 01:23:41.693753 | TASK [upload-logs : Create log directories]
2026-01-07 01:23:42.267637 | localhost | changed
2026-01-07 01:23:42.274242 |
2026-01-07 01:23:42.274458 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-07 01:23:42.820870 | localhost -> localhost | ok: Runtime: 0:00:00.007836
2026-01-07 01:23:42.827974 |
2026-01-07 01:23:42.828178 | TASK [upload-logs : Upload logs to log server]
2026-01-07 01:23:43.481730 | localhost | Output suppressed because no_log was given
2026-01-07 01:23:43.485691 |
2026-01-07 01:23:43.485893 | LOOP [upload-logs : Compress console log and json output]
2026-01-07 01:23:43.546598 | localhost | skipping: Conditional result was False
2026-01-07 01:23:43.552181 | localhost | skipping: Conditional result was False
2026-01-07 01:23:43.559570 |
2026-01-07 01:23:43.559796 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-07 01:23:43.609320 | localhost | skipping: Conditional result was False
2026-01-07 01:23:43.609909 |
2026-01-07 01:23:43.613999 | localhost | skipping: Conditional result was False
2026-01-07 01:23:43.620215 |
2026-01-07 01:23:43.620382 | LOOP [upload-logs : Upload console log and json output]